An example of the collective action failures that happen when millions of not-so-bright humans try to cooperate. From the BBC:

US President Barack Obama had laid out his vision for the future of human spaceflight. He was certain that low-Earth orbit operations should be handed to the commercial sector - the likes of SpaceX and Orbital Sciences Corp. As for Nasa, he believed it should have a much stronger R&D focus. He wanted the agency to concentrate on difficult stuff, and take its time before deciding on how America should send astronauts to distant targets such as asteroids and Mars.

This vision invited fury from many in Congress and beyond because of its likely impact in key states where the re-moulding of the agency would lead to many job losses: Florida, Texas, Alabama and Utah.

The Committee's plan retained the provision of seed funding to the commercial sector to help it develop low-cost "space taxis" capable of taking astronauts to and from the ISS. The funding arrangements would change, however: instead of the White House's original request for $3.3bn over three years, the Committee's approach would provide $1.3bn. (Obama had wanted some $6bn in total over five years; the Committee says the total may still be possible, but over a longer period.)

Make-work bias and pork-barrel funding are not exactly news, but in this case they are exerting a direct negative influence on the human race's chances of survival. 

Opinion in singularitarian circles has gradually shifted toward de-emphasizing the importance of space colonization for the survival of the human race. The justification is that if a uFAI is built, we're all toast, and if an FAI is built, it can build spacecraft that make the Falcon 9 look like a paper aeroplane.

However, the development of any kind of AI may be preceded by a period in which humanity has to survive nano- or bio-disasters, which space colonization definitely helps to mitigate. Before, or soon after, we develop cheap, advanced nanotechnology, we could already have a self-sustaining colony on the moon (though this would require NASA to get its ass in gear).

I leave you with an artist's impression of the physical embodiment of government inefficiency, a spacecraft optimized to make work rather than to advance the prospects of the future of the human race:

A shuttle-derived concept for a heavy-lift rocket

The Space Shuttle cost $1.5 billion per launch (including development costs), so with a payload of 25 tons to LEO, that makes a cost of $60,000 per kg to orbit. Falcon 9 gets 10 tons to orbit for $50 million, a cost of $5,000/kg, and Falcon 9 Heavy gets 32 tons for (apparently) $78 million, a price of roughly $2,500/kg. As the numbers clearly indicate, what we need is obviously another Space Shuttle.
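The cost-per-kg arithmetic can be checked directly (using the figures as quoted above; actual launch prices vary):

```python
# Cost per kg to LEO, computed from the figures quoted above.
# Each entry is (total launch cost in dollars, payload in metric tons).
vehicles = {
    "Space Shuttle (incl. development)": (1_500_000_000, 25),
    "Falcon 9": (50_000_000, 10),
    "Falcon 9 Heavy": (78_000_000, 32),
}

for name, (cost_usd, payload_tons) in vehicles.items():
    per_kg = cost_usd / (payload_tons * 1000)  # tons -> kg
    print(f"{name}: ${per_kg:,.0f}/kg")
```

Note that the Falcon 9 Heavy figure comes out at $2,437.50/kg, rounded above to $2,500/kg.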

 

How realistic is a risk-reducing colony?

Robin Hanson points out that a self-sustaining space/lunar/Martian colony is a long way away, and Vladimir Nesov and I point out that self-sustaining is unnecessary: a colony somewhere (the moon, under the ground on Earth, Antarctica, etc.) needs only to be able to last a long time and to undo the disaster. So Vladimir suggests a quarantined underground colony that can do Friendly AI research in case of a nuclear/nanotech/biotech disaster.

 

Space colonies versus underground colonies

Space imposes an inherent cost disadvantage on building a long-life colony, roughly proportional to the cost per kg to orbit. Once the cost to orbit falls below, say, $200/kg, the cost of building a very reliably quarantined, nuke-proof shelter on Earth will catch up with the costs inherent in operating in vacuum.
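A back-of-envelope sketch of that crossover claim. The colony mass and shelter cost here are purely hypothetical placeholders, not figures from the post:

```python
# Hypothetical: at what launch price does lofting a colony's mass to
# orbit cost no more than building a hardened shelter on Earth?
colony_mass_kg = 5_000_000              # assume 5,000 t of habitat and supplies
earth_shelter_cost_usd = 1_000_000_000  # assume $1bn for a quarantined, nuke-proof shelter

breakeven_usd_per_kg = earth_shelter_cost_usd / colony_mass_kg
print(f"Launch-cost parity at ${breakeven_usd_per_kg:.0f}/kg")
```

Under these made-up numbers parity lands at $200/kg; the real crossover depends entirely on how heavy a viable colony is and how expensive a reliable shelter turns out to be.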

It was also noted that motivating people to become lunar or Martian colonists, with disaster resilience as a side benefit, seems a hell of a lot easier than motivating them to be underground colonists. An underground colony with the sole aim of allowing a few thousand lucky humans to survive a major disaster is almost universally perceived negatively by the public; it pattern-matches with "unfair", "elitists surviving whilst the rest of us die", etc. It should also be noted that de facto no-one constructed such a colony even though the need was great during the Cold War, and no-one has constructed one since, or even tried to my knowledge (smaller underground shelters have been built, but they wouldn't make the difference between extinction and survival).

On the other hand, most major nations have space programs, and it is relatively easy to convince people of the virtue of colonizing Mars: "the human urge to explore", etc. Competitive, idealistic and patriotic pressures seem to reinforce each other for space travel.

What matters is therefore not the dollar cost of a space colony versus an underground colony, but the amount of advocacy required to get people to spend the requisite money. It may be that no realistic amount of advocacy will get people to build, or even permit the construction of, a risk-reducing underground colony.

 

Rhetoric versus rational planning

The thoughts you verbalize whilst planning risk reduction are not necessarily the same as the words you emit in a policy debate. Suppose there is a debate involving an existential risk-reducer (X), a space advocate (S), and a party that is moderately anti-space-exploration (A), for example the public.

Perhaps S has convinced A not to block space exploration, in part because saving the human race seems virtuous, and then X comes along and points out that underground shelters do the same job more efficiently. X has weakened S's position more than she has increased the probability of an underground shelter being built. Why? First, in a debate about space exploration, people will decide on the fate of space exploration only, then forget the details. The only good outcome of the debate for X is that space exploration goes ahead. Whether or not underground shelters get built will be (if X is really lucky) another debate entirely; most likely there will simply never be a debate about underground shelters.

Second, space is a rhetorically strong position. It provides jobs (voters are insane: they are pro-government-funded-jobs and anti-tax), it fulfills our far-mode need to be positive and optimistic, symbolizing growth and freedom, and it fulfills our patriotic need to be part of a "great" country. Also, don't underestimate the rhetorical force of the subconscious association of "up" with "good" and "down" with "bad". Underground shelters have numerous points against them: they invoke pessimism (they're only useful in a disaster), selfishness (wanting to live whilst others die), "playing god" (who decides who gets into the shelter? The deontologist concludes that the most ethical option is for no-one to go in, so don't bother building it) and injustice.

So by pointing out that space is not the most efficient way to get a disaster shelter, X may in fact increase existential risk. If instead she had cheered for space exploration and kept quiet about underground options, or framed the choice as a false dichotomy, S's case would have been strengthened, and some branches of the future that would otherwise have died would survive. Furthermore, it may be that X doesn't want to spend her time advocating underground shelters, because she thinks they have worse returns than FAI research. So X's best policy is simply to mothball the underground shelter idea, praise space exploration whenever it comes up, and focus on FAI research.

 

Comments
Roko:

An interesting note on colonizing mars: Right now, we could send an unmanned mission to plant 20kT nukes under dust flows near the frozen CO2 poles. Detonation would cover the CO2 with dark dust and cause it to start subliming, setting off a chain reaction of global warming on Mars. This process is simple and cheap to start, but also inherently slow (takes decades), and it might not actually work. Once the planet has warmed up (this would take until 2020 if we started the process now, I think), algae would be able to live on the planet, converting CO2 into O2, leading to habitability.

Admittedly, the returns per dollar on this project are not as good as the best projects we do (to start with, the sheer cost of such a space mission would be a minimum of $100,000,000). But they are amenable to a much larger funding base, and have far more advocates, infrastructure, etc, so if the opportunity arises to provide positive publicity for such proposals, we should do it.

Also, compare a set of missions to warm Mars up and seed it with algae, at a cost of perhaps $5 billion, to the Iraq/Afghan wars at $3,000 billion.

See this article by Zubrin for more such ideas, including mirrors and super-greenhouse gases, which seem to be an order of magnitude more expensive but more reliable.

Note that developments in robotics and synthetic biology make everything more viable.

CarlShulman:
Citations needed.
Roko:
Looking for citations makes me doubt whether the nuke idea actually works. JoshuaZ cites the place I found the idea. Zubrin's detailed paper (cited above) may partially explain why: only if the feedback coefficient is optimistically high would such an intervention work. Still, there are other methods that come with an order-of-magnitude higher price tag, such as super-greenhouse gases. Also, we don't yet know how favorable the feedback coefficient is. Zubrin does, however, propose a 125km-radius mirror to melt the ice caps, and dust would make such a project much more efficient. Building a 10,000-ton reflector in space is no mean feat, though. I still claim that we could terraform Mars for less than a tenth of the cost of Iraq/Afghanistan, if the money were actually used sanely (which is itself doubtful).
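The role of the feedback coefficient can be illustrated with a toy model (my own sketch, not Zubrin's calculation; all numbers are hypothetical): an initial warming nudge releases CO2 that causes a further, proportional round of warming, and so on. The series converges for k < 1 and runs away for k >= 1, which is why the intervention only works if the coefficient is high.

```python
# Toy feedback model: an initial warming nudge dT0 releases CO2,
# which causes k * dT0 of further warming, and so on.
# k < 1  -> the series converges (the nudge fizzles out);
# k >= 1 -> runaway warming (what a terraforming nuke would need).
# All numbers here are hypothetical.

def total_warming(dT0, k, steps=1000, runaway_threshold=50.0):
    total, increment = 0.0, dT0
    for _ in range(steps):
        total += increment
        if total > runaway_threshold:
            return float("inf")  # treat as runaway
        increment *= k
    return total

print(total_warming(1.0, 0.5))  # fizzles: converges to about 2
print(total_warming(1.0, 1.1))  # runaway
```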
JoshuaZ:
This article doesn't cite everything that Roko says, but seems like an OK citation for the general idea of using nukes to cover the poles with dust. I don't know how reliable a source that is. I am under the impression that Zubrin, in his book The Case for Mars, suggests various methods for covering the poles with dust but doesn't discuss using nukes. Given Zubrin's general approach and the extensive nature of the book, this suggests to me that Zubrin doesn't take the idea seriously (and he's clearly thought about Mars colonization more than almost anyone else). However, the book is old enough at this point that if this is a new idea he may simply not have been aware of it at the time.
Roko:
If Zubrin didn't mention nukes, it may have been for PR reasons.

The mere ability to hurl things into space doesn't reduce existential risk at all. The only thing that would do that is the ability to create an independently self-sustaining economy in space. But we are so very far away from that, cheaper space-flight just isn't of much help now. Far better to just grow the world economy and tech-base faster, then make cheaper space flight when we are nearer the point where an independent space economy is feasible.

Roko:
Note that a moon/Mars base wouldn't have to produce everything it consumed; there could be some things that just last a long time, like the TerraPower nuclear reactor, containment domes that naturally last a long time, or large stores of food or chemicals that just sit on the moon. Most importantly for Mars, the effort put into warming the planet and finding suitable synthetic life-forms to convert the atmosphere would be a one-off investment that would pay returns forever. The moon/Mars base could ride out a nuclear winter, spend decades finding a cure to a bioengineered virus, and maybe even find a highly effective blue goo to fight grey goo (though this last one is admittedly much harder, but 2 out of 3 ain't bad).
sketerpot:
I'm going to tech-nerd out and elaborate on some of the things you said. This is a joyous thing, so thanks for the opportunity. ;-) On the TerraPower reactor: you can get much the same effect with any breeder reactor; indeed, if you're sending it to the moon or Mars, a LFTR would probably be a better investment. But either one works. Long-lasting containment domes are also a very reasonable thing to expect. For building on the moon or Mars with native materials, the easiest thing to do is form them into bricks and build masonry structures. Arches and domes are not only easy structures to make from bricks, they are extraordinarily stable, capable of remaining in place even after taking considerable damage and wear. Plus, on the moon you would probably build very thick domes (or half-cylinders) to get enough radiation shielding, and those would naturally be very strong.
CarlShulman:
I agree with Robin, and underground refuges do compete with space, in our advocacy/attention if nothing else. Heck, if one is keen on exploiting the moon-landing-legacy NASA budget, push for more Biosphere 2-type projects, nominally in preparation for space travel.
Roko:
I'm worried about being on the other side of the debate from both Robin and Carl. I guess I was thinking of Nick Bostrom giving a speech praising the existing private space industry, and that adding some legitimacy to the claim that private spaceflight is for the greater good. In fact exactly this mechanism (with Stephen Hawking advocating instead of Nick) is actually contributing to the resurgence of space that we do have. This mechanism is cheap, and it diverts resources from places where they clearly do absolutely no good for existential risks to somewhere where they do some small amount of good. You could also advocate the construction of an underground shelter, but as others have commented, this has emotional connotations of selfishness, so although you get more risk reduction per unit money, you get less per unit advocacy (maybe).
JoshuaZ:
Programs of that sort are generally not self-sufficient and isolated enough to substantially reduce existential risk. For example, a gray goo scenario will hit those about as hard as it hits anywhere else. And such programs are rarely long-term enough to be able to remain isolated for long if normal infrastructure gives out.
Roko:
Yes, I agree.
timtyler:
We can't colonise other habitats just yet - but we could get into a better position to punch out incoming meteorites.
Vladimir_Nesov:
This risk is relatively insignificant.
timtyler:
To argue that we shouldn't devote some resources to it, I think it would be necessary to argue that the disadvantages outweigh the advantages. Arguing that the advantages are relatively small doesn't really cut it when the future of civilisation is at stake.

Arguing that the advantages are relatively small doesn't really cut it when the future of civilisation is at stake.

Yes, it does. That the advantages are relatively small (compared to other existential risk reduction plans) is meaningful, since it suggests reallocation of resources. Saying that we can't compromise because "the future of civilization is at stake" invites stupidity.

torekp:
But the comparison to other existential risk reduction plans is not the right comparison. We should compare against the other uses to which the resources will likely be put, and those usually won't be existential risk reduction projects.
CarlShulman:
Who is this argument supposed to be addressed to?
khafra:
That's what always gets me about policy debates. If we're debating what an LW member who gets put in charge of the national budget should do, Nesov has it. If asking what every LW member should vote for if a referendum specifically on "allocate billions to asteroid defense" comes up, torekp is correct. I am annoyed by disagreements between people who actually agree which take this form.
timtyler:
So the case you are apparently attempting to make is that all resources that could be spent on asteroid deflection would be better spent on other things. Maybe, but that is far from obvious. Here is what is currently happening: http://en.wikipedia.org/wiki/Asteroid_impact_avoidance
Vladimir_Nesov:
I'm not attempting to make that case; at some point (a sufficiently low amount of resources) the marginal worth of asteroid avoidance might become competitive.
timtyler:
Right, OK, that's what I was saying. Some people are space cadets, and I figure some of them can probably make useful contributions. Space has some other possibilities for reducing risks too. For example, communications satellites network the world, make everyone friends, and reduce the chances of war. Of course there's also Star Wars, but I don't think that space can be simply written off as not helping.
Roko:
Agreed
JoshuaZ:
Is it that insignificant? Asteroids larger than 1 km hit the Earth about every 500,000 years (source). That's in the large-scale-devastation but not extinction range. Indeed, even asteroids a few tens or hundreds of meters across can cause major devastation. The object that caused the Tunguska event is estimated to have been between 50-80 meters, and such impacts occur every few hundred years or so. Historically such events have had minimal loss of human life, but that's partially because much less of Earth was populated by humans than is now. So even without worrying about existential-level threats, asteroid impacts pose a substantial risk to human life, and as the population grows that risk will become more severe. How frequent are extinction-level asteroid collisions? There's some disagreement, but the rate seems to be between about 1 per 40-200 million years. That seems plausibly like a low-probability existential threat, but how does one compare it to other existential risks? How does it compare to the chance of, say, global thermonuclear war, or the probability of a uFAI arising? If one puts a very low probability on a uFAI, or a low probability not on a uFAI but on an AI going FOOM, then this becomes potentially more relevant. Note also that one doesn't really need an existential-level asteroid impact to permanently ruin human life. If we use up enough resources on Earth, especially fossil fuels, then it may not be possible to bootstrap ourselves back up to modern tech levels if the tech level is substantially reduced. As we use more limited resources this risk becomes more serious. We're not anywhere near using up the deuterium supply, but that's also limited, as is the supply of U-235, which is much closer to depletion (although again, not very close). This permanent resource crunch after a major civilization setback is enough of a risk that Nick Bostrom takes it seriously (see first link given above). An asteroid in the 3-5 km range, if it hit in a bad way, could cause this so
Roko:
Implying a 1/5000 chance this century. That's small potatoes compared to Bio, Nano, AI.
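The 1/5000 figure is just the quoted impact rate turned into a per-century probability, treating >1 km impacts as a Poisson process:

```python
import math

rate_per_year = 1 / 500_000  # >1 km impacts, rate quoted above
years = 100

# Probability of at least one such impact in the next century.
p_century = 1 - math.exp(-rate_per_year * years)
print(f"P = {p_century:.6f} (about 1 in {1 / p_century:,.0f})")
```

For rates this small, 1 - exp(-rt) is essentially rt, so the answer is just 100/500,000 = 1/5000.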
JoshuaZ:
Where are you getting your estimates of risk probability from? If by Nano you mean a nanotech gray-goo scenario, then frankly that seems much less likely than 1/5000 in the next century. People who actually work with nanotech consider that sort of scenario extremely unlikely for a variety of reasons, including that there's too much variation in common chemical compounds for nanotech devices to act as universal assimilators, and there's no clear way for such entities to get efficient energy resources. Now, one might argue that a very intelligent AI could solve those problems, but in that case you're talking about just the AI problem, and nanotech becomes incidental to it. I'm not sure what you mean by "bio", but if you mean biological threats then this seems unlikely to be an existential-level threat, for the simple reason that it is very rare for a species to be wiped out by a pathogen. We might be able to make a deliberately dangerous pathogen, but that requires motivation and expertise; the set of people with both the desire and the capability to construct such entities is likely small, and will likely remain small for the indefinite future.
orthonormal:
I assume "Bio, Nano, AI" to mean "any global existential threats brought on by human technology", which is a big disjunction with plenty of unknown unknowns, and we already have one example (nuclear weapons) that could not have plausibly been predicted 50 years beforehand. Even if you discount the probabilities of hard AI takeoff or nanotech development, you'd have to have a lot of evidence in order to put such a small probability on any technological development of the next hundred years threatening global extinction.
homunq:
As someone who does largely discount the threats mentioned (I believe that the operationally-significant probability for foom/grey goo is order 10^-3/10^-5, and the best-guess probability is order 10^-7/10^-7), I still endorse the logic above.
orthonormal:
Er, maybe I was being unclear. Even if you discount a few specific scenarios, where do you get the strong evidence that no other technological existential risk with probability bigger than .001 will arise in the next hundred years, given that forecasters a century ago would have completely missed the existential risk from nuclear weapons? I agree that cataloguing near-Earth objects is obviously worth a much bigger investment than it has at present, but I think an even bigger need exists for a well-funded group of scientists from various fields to consider such technological existential risks.
Mitchell_Porter:
If I wanted to exterminate the human race using nanotechnology, there are two methods I would think about. First method, airborne replicators which use solar power for energy and atmospheric carbon dioxide for feedstock. Second method, nanofactories which produce large quantities of synthetic greenhouse gases. Under the first method, one should imagine a cloud of nanodust that just keeps growing until most of the CO2 is used up (at which point all plants die). Under the second method, the objective is to heat the earth until the oceans boil. For the airborne replicator, the obvious path is "diamondoid mechanosynthesis", as described in papers by Drexler, Merkle, Freitas and others. This is the assembly of rigid nanostructures, composed mostly of carbon atoms, through precisely coordinated deposition of small reactive clusters of atoms. To assemble diamond in this way, one might want a supply of carbon chains, which remain sequestered in narrow-diameter buckytubes until they are wanted, with the buckytubes being positioned by rigid nanomechanisms, and the carbon chains being synthesized through the capture and "cracking" of CO2 much as in plants. The replicator would have a hard-vacuum interior in which the component assembly of its progeny would occur, and a sliding or telescoping mechanism allowing temporary expansion of this interior space. The replicator would therefore have at least two configurations: a contracted minimal one, and an expanded maximal one large enough to contain a new replicator assembled in the minimal configuration. There are surely hundreds or thousands of challenging subproblems involved in the production of such a nanoscale doomsday device - power supply, environmental viability (you would want it to disperse but to remain adrift), what to do with contaminants, to say nothing of the mechanisms and their control systems - but it would be a miracle if it was literally thermodynamically impossible to make such a thing. Cells do it, and yes t
NancyLebovitz:
I'm not sure if this counts as an existential threat, but I'm more concerned about a biowar wrecking civilization: enough engineered human and food diseases that civilization is unsustainable. I can't judge the likelihood, but it's at least a combination of plausible human motivations and technology. Your tech is plausible, but it's hard to imagine anyone wanting not just to wipe out the human race, but also to do such damage to the biosphere. There are a few people who'd like the human race to be gone (or at least who say they do), but as far as I know, they all want plants and animals to continue without being affected by people.
Mitchell_Porter:
There are definitely people who would destroy the whole world if they could. Berserkers, true nihilists, people who hate life, people who simply have no empathy, dictators having a bad day. Even a few dolorous "negative utilitarians" exist who might do it as an act of mercy. But the other types are surely more numerous.
Roko:
Massive overconfidence. You need to go closer to 50/50.
JoshuaZ:
Where is your estimate coming from? Mine comes from the following: 1) Experts suggest that the possibility is very unlikely. For example, the Royal Society's official report on the dangers of nanotech concluded that this sort of scenario was extremely unlikely (see report here), and good Bayesians should listen to subject-matter experts. 2) Every plausible form of nanotech yet investigated shows no capability of grey-gooing. For example, consider DNA nanotechnology, an area where we've had a fair bit of success with both computation and constructing machines. Yet these devices work only in a small range of pH values and temperatures, and often require specific specialized enzymes. Also, as with any organic nanotech, they will face competition and potentially predation from microorganisms. Inorganic nanotech faces other problems, such as less available energy and far fewer options for possible chemical constructions; not using carbon already reduces the grey-goo potential a lot.
Roko:
But how did you translate "very unlikely" into "less than 1 in 5000"? Why not say 1%? Or 3%? Or 1 in 10^100? I think I need to write an article on why one shouldn't be so keen to assign very low probabilities to events where the only evidence is extrapolative.
Vladimir_Nesov:
It still depends on the nature of the event (Russell's teapot). There is no default level of certainty, no magical 50/50.
Roko:
Sure, for cases where arbitrary complexity has been added, the "default level of certainty" is 2^-(Complexity).
Vladimir_Nesov:
Unfortunately, you often have to rule intuitively. How does complexity figure in the estimation of probability of gray goo? Useful heuristic, but no silver bullet.
Roko:
I think that one has to differentiate between the perfect unbiased individual rationalist who uses heuristics but ultimately makes the final decision from first principles if necessary, and the semi-rationalist community, where individual members vary in degree of motivated cognition. The latter works better with more rigid rules and less leeway for people to believe what they want. It's a tradeoff: random errors induced by rough-and-ready estimates, versus systematic errors induced by wishful thinking of various forms.
FAWS:
Less than 1 in 5000 sounds about right to me. I'm much more worried about other nano-dangers (e.g. clandestine brainwashing) than grey goo. Not only is there the problem of technological feasibility, but even if it's possible there is the still larger problem of economic feasibility. Molecular von Neumann machines, if possible, should be vastly more difficult to develop than vastly more efficient static nano-assemblers operating in a controlled environment (probably vacuum?) and integrated into an economy with mixed nano- and macrotech taking advantage of specialization, economies of scale, etc. Static nano-assemblers should already be ubiquitous long before molecular von Neumann machines start to become feasible, so why develop the latter in the first place? For medical applications, specialized medical nanobots running on glucose and cheaply mass-produced in the static nano-assemblers should also beat them. They'd be useful in space and for sending to other planets, but there wouldn't be all that much money in that, and sending a larger probe with nano-assemblers and assorted equipment would also do. Since there would be no overwhelming incentive against outlawing the development of MvNM, doing so would be feasible, and considering how easy it should be to scare people with the grey-goo scenario in such a world, very likely. That pretty much leaves secret development as some sort of weapon, which would make grey-goo defense a military issue. Nano-assemblers should be much better at producing nano-hunters and nano-killers (or more assemblers, mining equipment, planes, rockets, bombs) than MvNM are at producing more of themselves, and nano-hunters and nano-killers much better at finding and destroying them; there'd also be the option of using macroscopic weapons against larger concentrations.
Roko:
The original discussion was not concerned with the dangers of grey goo per se, but with any extinction risk associated with nanotech. Remember, the original question, the point of the discussion, was whether asteroids were irrelevant as an x-risk. So whilst you make good points, it seems that we now have a lost-purpose debate rather than a purposeful collaborative discussion.
FAWS:
Other nano-risks aren't necessarily extinction risks, though. And while I'm sort of worried that someone might secretly use nano to rewire the brains of important people and later of everyone to absolute loyalty to them (an outcome that would be a lot better than extinction, but still pretty bad) or something along those lines it doesn't seem obvious that there is anything effective we could spend money on now that would help protect us, unlike asteroids. At least at the levels of spending asteroid danger prevention could usefully absorb.
Roko:
But now you have to catalogue all the possible risks of nanotech, add a category for "risks I haven't thought of", and then claim that the total probability of all that is < 1/5000. You have to consider military nanotech. You have to consider nano-terrorism and the balance of attack versus defence; you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?), etc. I am sure there are at least three nano-risk scenarios documented on the internet that you haven't even thought of, which instantly invalidates claiming a figure as low as 1/5000 for the extinction risk before you have considered them. This argument reminds me of the physicists who claimed to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability of a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that the total probability of an LHC disaster was < 1 in 1 million.
FAWS:
The question wasn't whether nanotech is potentially more dangerous than asteroids overall, though. It was whether all money available for existential risk prevention/mitigation would be better spent on nano than on space-based dangers. There doesn't seem to be any good way to spend money so that all possible nano risks are mitigated (other than lobbying to ban all nano research everywhere, and I'm far from convinced that the potential dangers of nano are greater than the benefits). I'm not even sure there is a good way to spend money on mitigation of any single nano risk. The most obvious mitigation/prevention technology would be really good detectors for autonomous nanobots, whether self-reproducing or not. But until we know how they work and what energy source they use we can't do all that much useful research in that direction, and spending once we know what we need would probably be much more efficient. This also looks like an issue where the military will spend such enormous amounts once the possibilities are clear that money spent beforehand will not affect the result all that much. Yes, I did think of nuclear proliferation; that's one of the most obvious ones. It's not going to be possible to prevent a nation with access to uranium from building nuclear weapons, but I think that would be the case anyway, with or without nano. The risk of private persons building them might be somewhat increased. I'm not sure whether there is any need to separate isotopes in whatever machines pre-process materials in/for nano-assemblers, or whether they lend themselves to being modified for that. Assuming they do, you'd need to look at anyone who processes large amounts of sea water, or any other material that contains uranium. Perhaps you could mandate that only designs that are vulnerable to radioactivity can be sold commercially, or make the machines refuse to work with uranium in a way that is hard to remove. I don't see how spending money now could help in any way. I'm not sure the probability of a serious e
3JoshuaZ14y
The 1/5000 number only works for the really large asteroids (> 1 km in diameter). Note that, as I pointed out earlier, much smaller asteroids can be locally devastating. The resources that go to finding the very large asteroids also help track the others, reducing the chance of loss of human life even outside existential risk scenarios. And as I pointed out, there are a lot of other potential space-based existential risks. That said, I think you've made a very good point above about the many non-gray-goo scenarios that make nanotech a severe potential existential risk. So I think I'll agree that if one compares the probability of a nanotech existential risk scenario to the probability of a meteorite existential risk scenario, nanotech is more likely. Your point about the impact of nanotech on nuclear proliferation I find particularly disturbing. The potential for nanotech to greatly increase the efficiency of enriching uranium seems deeply worrisome, and that's really the main practical limitation on building fission weapons.
1Roko14y
Upvoted for updating. I agree that smaller asteroids are an important consideration for space; we expect about one Tunguska event per century, I believe, and as far as I know each impact stands a ~5% chance of hitting a populated area. Averting that chance for the next Tunguska is a good thing.
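A quick back-of-envelope on the Tunguska numbers above. The one-event-per-century rate and the ~5% populated-area figure are taken from the comment; the Poisson model is my own simplifying assumption.

```python
import math

# Assumptions (from the thread, not independently sourced):
# ~1 Tunguska-class impact per century, ~5% chance any given
# impact lands on a populated area.
P_POPULATED = 0.05
EVENTS_PER_CENTURY = 1.0

def p_populated_hit(centuries: float) -> float:
    """P(at least one populated-area hit) over the span,
    modelling impacts as a Poisson process."""
    expected_hits = EVENTS_PER_CENTURY * centuries * P_POPULATED
    return 1.0 - math.exp(-expected_hits)

for c in (1, 5, 10):
    print(f"{c:2d} centuries: {p_populated_hit(c):.1%}")
# -> roughly 4.9%, 22.1%, 39.3%
```

Even over a ten-century horizon the cumulative chance stays well under a coin flip, which is consistent with treating small impacts as a humanitarian rather than an existential concern.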
2whpearson14y
A lot of it seems to hinge on the probability you assign to those threats being developed in the next century.
3Vladimir_Nesov14y
Accidental grey goo doesn't seem plausible, and purposeful destructive use of nanotech doesn't necessarily fall in that category. We can have nanomachines that act as bioweapons, infecting people and killing them.
2FAWS14y
Are you disagreeing with something I said? I'm not sure nanotech would be better at killing that way than a designer virus, which should be a lot easier and cheaper (possibly even when accounting for the need to find a way to prevent it from spreading to your own side, if that's necessary). Nanotech might be able to do things that a virus can't, but that would be the sort of thing I mentioned. Anyway I don't see how we could effectively spend money now to prevent either.
2Vladimir_Nesov14y
I agree with this. I disagree that there are no clear non-goo extinction risks associated with nano, and gave an example of one.
3Vladimir_Nesov14y
It's relatively insignificant, compared to other sources of existential risk. Overall, it's a vastly better investment than lipstick.
0Vladimir_Nesov14y
It's not generally valid, since this diverts resources from development of other potentially relevant tech that could help with establishing a colony once the time is right.
-1Roko14y
Speed of light fail
2JoshuaZ14y
No. We know that there are changes in a star before the supernova occurs. For example, in a Type II supernova, the radiation level initially increases linearly. For other supernova types the luminosity of the star does sometimes increase before the supernova event itself. Also, hours before a supernova, there may be a drastic increase in neutrino production. It is also likely that more detailed observation of stars will give us a better idea what sort of more subtle signs show up prior to supernovae.
0Roko14y
Think about it. You observe changes in a star 8 light-hours away from earth, and radio your observations back. What speed do the radio waves travel at? c. What speed does the light bearing the original observation travel at? c. What speed does the supernova blast travel at? also c. Neutrinos travel so close to c it makes no difference.
0[anonymous]14y
From the parent post: The notification and the blast travel at c, but the blast is hours behind the notification.
0JoshuaZ14y
If observed changes to a star happen well before the supernova event itself then the fact that everything is happening at c doesn't matter. Say for example that the neutrino flux increase happens 24 hours beforehand. That means we have a 24-hour warning before the supernova event. Similarly, if we see an increase in luminosity before the supernova we still get advance warning. What matters is that there is a delay between when stars show signs of supernovaing and when they actually supernova.
3Vladimir_Nesov14y
The point is that being closer to the star when that happens doesn't provide you with more forewarning than if you look at it from home.
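The timing argument can be sketched numerically. The precursor signal and the blast both travel at (essentially) c, so the observer's distance cancels and the warning time equals the precursor's lead time at the source. The 24-hour neutrino lead is JoshuaZ's hypothetical figure, not a measured value.

```python
HOURS_PER_LY = 8766.0  # light takes ~1 year (~8766 h) to cross one light year

def warning_hours(precursor_lead_h: float, distance_ly: float) -> float:
    """Warning time = blast arrival time minus precursor arrival time.
    Both signals travel at c, so the distance term cancels."""
    precursor_arrival = distance_ly * HOURS_PER_LY
    blast_arrival = distance_ly * HOURS_PER_LY + precursor_lead_h
    return blast_arrival - precursor_arrival

for d in (0.001, 640.0):  # a probe near the star vs. a telescope at home
    print(d, warning_hours(24.0, d))  # -> 24.0 either way
```

This is why sending anything toward the star buys no extra forewarning: only the intrinsic precursor lead time matters.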
8JoshuaZ14y
I don't think anyone is advocating that we send actual probes to Betelgeuse or IK Pegasi. I'm confused why one would think that would even be on the table. Even if we sent a probe today at a tenth of the speed of light (well beyond our current capabilities) it would still take around 1500 years to get to IK Pegasi. I don't know why one would even think that would be at all in the useful category. What is helpful is having more space-based observation equipment in our solar system. The more we put into space the less of a problem we have with atmospheric interference, artificial radio sources, and general light pollution. To use one specific example that would help a lot: if we had a series of optical telescopes spread out around the solar system we could use parallax measurements to get a better idea how far away Betelgeuse is. For a variety of reasons there's a lot of uncertainty about how far away it is, with 330 light years as a lower estimate and around 700 as an upper estimate, although around 640 seems to be where estimates are settling. Given the inverse square law for radiation, this matters for a supernova concern. A difference of 300 light years corresponds to about a factor of 4 in the radiation strength. Overall, most of the interesting, practical investigation and reduction of astronomical existential risks can be done right here in our home system.
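The parallax and inverse-square arithmetic above, made explicit. The 10 AU baseline is a hypothetical solar-system-wide telescope spacing, not an existing instrument.

```python
# Parallax scales linearly with baseline; radiation dose falls off
# with the square of distance.
LY_PER_PARSEC = 3.2616

def parallax_mas(distance_ly: float, baseline_au: float = 1.0) -> float:
    """Parallax angle in milliarcseconds for a given baseline in AU."""
    distance_pc = distance_ly / LY_PER_PARSEC
    return 1000.0 * baseline_au / distance_pc

for d in (330.0, 640.0, 700.0):  # the disputed Betelgeuse distances
    print(f"{d:5.0f} ly: {parallax_mas(d):.1f} mas (1 AU baseline), "
          f"{parallax_mas(d, 10.0):.1f} mas (10 AU baseline)")

# Inverse-square law: dose ratio between the two extreme estimates
print((640.0 / 330.0) ** 2)  # -> ~3.76, i.e. "about a factor of 4"
```

A 10 AU baseline multiplies the measurable parallax angle tenfold, which is exactly why widely separated telescopes would pin down the distance.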
4Cyan14y
So the benefit of space-based observation is signal amplification rather than signal speed.
3JoshuaZ14y
In a nutshell yes. And the more signal amplification we get the quicker we can detect problems before it is too late.

I believe it's vastly more efficient to focus on FAI (or WBE institutions) and protected Earth-based shelters than space colonization.

It's unrealistically difficult to create in the near future a self-sustaining (and quarantined!) colony that can seed civilization after a disaster that takes down even Earth-based shelters. Development of more efficient space transit doesn't even seem an important subproblem of that. By the time it's done, with however much effort one can realistically expect, only a small fraction of remaining non-uFAI risk will remain unrealized, and Earth-based shelters could be made more resilient in the meantime, faster and cheaper.

2Roko14y
So, this claim depends upon two subclaims. (1) The probability distribution of time-to-colony (2) The probability distribution of time-to-non-AI-risk Can you post your probability distributions for these two events in separate comments, so that I can agree or disagree with each separately?
2xamdam14y
Additionally I think that a rational person should have some survivalist skills in their arsenal to improve their own, their family's, and their community's chances in a major local or "small" Earth-scale disruption. I think building Earth-based shelters is a good idea in general, but it will run into huge psychological walls because there will not be enough shelter for everybody. One advantage of the space strategy is skirting these issues, since space programs are not ostensibly survivalist-oriented.

I know you touched on this, but: since the beginning, the space program has existed due to make-work deals. To get the original legislation approved, they had to buy off legislators in various districts. (Why do you think the major centers are in Texas and Florida, two of the states mentioned?) To this day, the problem persists in that NASA can't switch to metric because of the numerous English-unit workshops scattered across the country that they've locked themselves into buying from.

But, to paraphrase a point EY made a while back: yes, it sucks tha...

And I've never understood this mentality. I don't feel entitled to perpetual demand for the kind of labor my employer provides, and I'd feel completely rotten about encouraging such waste just so I can keep exactly the same job. Where do people come up with this worldview?

Go back a generation and the concept of life-long careers was much more common. I think the social expectation for Boomers and earlier generations was that they would have a particular career for life, and many from those generations feel affronted at the thought of having to give up their existing career. Effectively they feel they've suffered a breach of the social contract.

1SilasBarta14y
But aren't the Boomers at the end of their careers now? It seems it would have to be a problem with a later cohort for this to be a major issue now.
3James_K14y
The ones in politics aren't at the end of their careers, which means that legislatures as a body will be more likely than the average person to consider making people change jobs unthinkable. You are right, though: this hypothesis predicts that demand for job security will fall over the next 10-20 years as the Boomers retire.
3Roko14y
You and I understand the principles of economic efficiency and the invisible hand. Ordinary people don't even not understand it. It comes from a different mental universe than their thoughts.
0JoshuaZ14y
The first time I read this I thought "right on!" but then rereading it I'm not actually sure what it means. Can you expand on what you mean?
9Roko14y
Suppose you take someone who doesn't know math, never has. To them, "million", "billion" and "trillion" mean "many". GDP is as meaningless to them as RFETR is to you, and to them economics means when rich greedy corporate people lie to them and take their jobs away. They are also heavily biased without realizing it, and without even realizing that people can be biased without realizing it (they think that all untruths are either lies or mistakes). They don't know what falsification or the scientific method is. Science and engineering are indistinguishable from magic to them. Then you take this person, and you try to explain "efficient allocation of capital" to them. You may as well try to explain what a frequent flyer club is to a cave-man. The words simply wouldn't generate concepts for him to misunderstand.
5JoshuaZ14y
Ok. I thought something approximately like that but wasn't sure if this was due to an illusion of transparency. Spending time on LW may just be making me too paranoid about that.
2Jonathan_Graehl14y
I thought Florida was closer to the equator than most of the US, which decreases the energy needed to achieve orbit. I've often wondered if this is significant; if so, then why can't we launch from some friendly equatorial country?
3nerzhin14y
The Europeans do.
1Alexandros14y
French Guiana is not an equatorial country friendly to Europe; it's an overseas region of France and therefore part of the EU.
0Roko14y
Because Europe is better than America. ;-)
2JoshuaZ14y
That's the correct reason for Florida. As I understand it, no equatorial country was considered stable and friendly enough to put the infrastructure there. And the US would have to then regularly transport a lot of equipment and personnel there. In contrast, putting mission control in Texas really was about politics. In particular, LBJ was from Texas. And while he was actually a fan of the space program (in some ways more so than Kennedy), he still wanted his home state to get something out of it.
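The size of the equatorial advantage can be sketched: a launch site gets a free eastward boost from Earth's rotation, proportional to cos(latitude). It's small next to the ~9,400 m/s total needed for low Earth orbit, but it translates into real payload margin. Site latitudes below are approximate.

```python
import math

EQUATORIAL_SPEED = 465.1  # m/s, Earth's surface rotation speed at the equator

def rotation_boost(latitude_deg: float) -> float:
    """Eastward surface speed (m/s) contributed by Earth's rotation."""
    return EQUATORIAL_SPEED * math.cos(math.radians(latitude_deg))

for name, lat in [("Equator", 0.0),
                  ("Kourou (French Guiana)", 5.2),
                  ("Cape Canaveral", 28.5),
                  ("Baikonur", 45.9)]:
    print(f"{name:24s} {rotation_boost(lat):6.0f} m/s")
```

Kourou's boost beats Cape Canaveral's by roughly 55 m/s, and Cape Canaveral's beats Baikonur's by about 85 m/s, which is part of why the Europeans launch from French Guiana.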

Come to think of it, we don't really need sustainable civilization-seeding space colonies or shelters to protect against global non-uFAI disasters, we only need matter-quarantined FAI research institutions (in space or Earth-based shelters) that can last long enough to complete the development of FAI.

[-]Roko14y30

And whilst we're on cheap but high-sanity ways to get stuff to orbit, Brian Wang's Nuclear Space Gun comes out on top.

Need 100,000 tons of aluminized mylar mirror or a CFC factory to go terraform mars? Easy. Just take one ageing 10MT nuke, a hole in the ground and a sprinkling of mad scientist. Total cost for the launch itself would be a small fraction of the NASA budget as far as I can see. The cost-to-orbit per kilogram would be rock-bottom.

[-][anonymous]14y30

Doesn't this post violate the "no politics" rule?

8Nic_Smith14y
It has always been my impression that there is a "no politics" guideline around here, not a rule. Rightfully so, as it's easy to generate irrational talk about politics which would quickly overwhelm just about anything else and ruin everyone's day. However, Roko brings up something that deserves serious discussion since there's a lot of interest in existential risk here, but a good historical analog seems like it would be difficult to find and might obfuscate more than enlighten (historical colonies are more controversial than space programs).
4Roko14y
I didn't say it was a left-wing or a right wing rocket!

A rocket that even has a wing-configuration handedness is pretty much screwed anyway ...

2[anonymous]14y
Look. I see that previous posts tagged "politics" have been basically anti-political, or a plea for rationality. It's all rather abstract. This post is not like that. The business with "make-work" is a partisan poke. (One I agree with, but never mind.)
6Roko14y
Which political party or faction supports government waste, pork-barrel money and make-work jobs at companies that have entrenched special interests? Is it anti-right because the right wing likes big business, or anti-left because it's big government? Seems like both to me...
2Aurini14y
Seconded. This post is "anti-current-party-in-power", which happens to be the Democrats, but even a cursory amount of research would provide examples of Bush-era policies benefiting local individuals at the cost of technology, and of the population in general. This example just happens to be more relevant to our concerns - existential threats and all.
7LucasSloan14y
Clinton, Bush Sr., Reagan, Carter, Ford, Nixon, Johnson, etc.
0timtyler14y
See: http://lesswrong.com/tag/politics/
[-]Roko14y10

Lastly, I should mention Asteroid Mining. Consider the asteroid Eros:

In the 2,900 cubic km of Eros, there is more aluminium, gold, silver, zinc and other base and precious metals than has ever been excavated in history or indeed could ever be excavated from the upper layers of the Earth's crust.

You suddenly begin to see that entrepreneurs like Elon Musk could be the force that pushes us into a space economy.

Brian Wang thinks that there is $100 trillion ($10^14) worth of platinum and gold alone there. Of course the price would begin to fall once you had made your first few hundred billion.

7Vladimir_M14y
Are there actually any materials on Earth that are so rare and precious (and perhaps in danger of running out in the foreseeable future) that it would make sense to mine them from space? By the way, the claim about aluminum sounds highly implausible to me. Aluminum accounts for about 8% of the Earth's crust by weight, and even if most of it is difficult to access, I would expect that more than the amount present on Eros would be extractable with methods much easier than any conceivable sort of asteroid mining.
4Roko14y
Rhodium is currently worth $88 million per 1000 kg. I think that platinum is an interesting possibility, as well as gold: 1000 kg of platinum is currently worth $50 million. See this table of elements from Wikipedia: {Platinum, Rhodium, Gold, Iridium, Osmium, Palladium, Rhenium, Ruthenium} are in the $10,000+ per kg range, with {Platinum, Rhodium, Gold} at $30,000+/kg. If you consider the basket of metals in that table as a whole, there's obviously a lot of money to be made, and I bet that at least one of them will hold its price relatively well as you mine more of it. When Brian Wang says Eros is worth $100 trillion, he's probably not far wrong.
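A sanity check on the valuation above, using the 2010-era prices quoted in-thread (not current quotes). The Eros mass figure is the NEAR Shoemaker estimate; the implied mass fraction is my own derived number.

```python
# Prices in USD/kg, from the comment's figures
PRICE_PER_KG = {"rhodium": 88_000, "platinum": 50_000}

total_value = 100e12  # Brian Wang's $100 trillion figure for Eros

# Tonnes of platinum-priced metal needed to reach that valuation:
tonnes_at_pt = total_value / (PRICE_PER_KG["platinum"] * 1000)
print(f"{tonnes_at_pt:,.0f} tonnes at platinum prices")  # -> 2,000,000

# What fraction of the asteroid's mass that represents:
eros_mass_tonnes = 6.687e12  # ~6.687e15 kg, NEAR Shoemaker estimate
print(f"mass fraction: {tonnes_at_pt / eros_mass_tonnes:.1e}")  # ~3e-7
```

A required precious-metal mass fraction on the order of a few tenths of a part per million is the kind of number the $100 trillion claim quietly rests on.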
3khafra14y
A cursory googling for "peak rare earth metals" yields a likely affirmative response. Hafnium, iridium, neodymium, lanthanum, cerium, and several others are necessary for modern electronics and/or EVs, and rapidly diminishing. Barring societal collapse or a new technological revolution on the scale of transistors, we'll probably want to go out and get more within the century - and that's not even including the advantage of avoiding the deleterious effects of mining on Earth.
2Soki14y
Helium-3 could be mined from the moon. It would be a good fusion fuel, but it is rare on Earth, so it makes sense to get it from space.
3Vladimir_M14y
Now that's interesting! I didn't know that the prospects for helium-3 fusion are allegedly that good. Still, given the previous history of controlled fusion research, I'm inclined to be skeptical. Do you know of any critical references about the present 3He fusion research? All the references I've seen from a casual googling appear to be pretty optimistic about it.
3Soki14y
I have no reference, but as far as I understand, deuterium-tritium fusion is easier to achieve than deuterium-helium-3. But deuterium-helium-3 seems cleaner and the energy produced is easier to harvest. So I think that the first energy producing fusion reactor would be a deuterium-tritium one, and deuterium-helium-3 would come later.
3JoshuaZ14y
The primary reason that D-T is considered to be more easily viable than others is that it has the best numbers under the Lawson criterion. This is also true under the triple product test. While Wikipedia gives a good summary I can't find a better reference that is online (the Wikipedia article gives references including Lawson's original paper, but I can't find any of them online). The real advantage of He3-deuterium fusion is that it is aneutronic, that is, it doesn't produce any neutrons. This means that there's much less nasty radiation to harm the containment vessel and other parts, and that much less of the energy will be in difficult-to-capture forms. This is especially important for magnetic confinement, since neutrons' lack of charge means they are not confined by electromagnetic fields. This is a non-technical article that discusses a lot of the basic issues including the distinction between fusion types, although they don't go through the level of detail of actually using Lawson's equation.
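The energy partition of the two reactions discussed above can be computed from standard textbook values. This is what "aneutronic" buys you: in D-T, most of the energy leaves as 14.1 MeV neutrons, which escape magnetic confinement and damage the vessel; in D-He3 the primary products are all charged.

```python
# Energy released per reaction, in MeV (standard textbook values):
#   D + T   -> He-4 (3.5)  + n (14.1)
#   D + He3 -> He-4 (3.6)  + p (14.7)
REACTIONS = {
    "D-T":   {"charged": 3.5,        "neutron": 14.1},
    "D-He3": {"charged": 3.6 + 14.7, "neutron": 0.0},
}

for name, e in REACTIONS.items():
    total = e["charged"] + e["neutron"]
    frac_n = e["neutron"] / total
    print(f"{name:6s} {total:5.1f} MeV total, "
          f"{frac_n:.0%} carried by neutrons")
# -> D-T: ~80% of the energy in neutrons; D-He3: 0%
```

(Note that a real D-He3 plasma also has D-D side reactions that produce some neutrons, so "aneutronic" holds for the primary reaction rather than the whole device.)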
[-]Roko14y10

Note that the feasibility of all these proposals is relative to sanity: the NASA budget is $20bn, and Quicklaunch has a viable system to launch bulk materials out of a space gun for $250/kg. So 1 kiloton costs just $250 million, or about 1% of the NASA budget. The space gun's cost is dominated by the fixed cost of the gun itself, and it scales up well (more volume per unit surface area of the projectile helps a space gun, because it reduces drag and drag heating, on top of the usual scale economies for a larger rocket), so if you really wanted to build a 10,000 square kilometer orbital mirror to terraform Mars, you could probably do it for less than 50% of one year's NASA budget.
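The cost arithmetic above, made explicit. The $250/kg figure and $20bn budget are from the post; the mylar areal density is my own assumption, and the mirror conclusion is quite sensitive to it.

```python
COST_PER_KG = 250.0  # USD/kg, the Quicklaunch bulk-launch price quoted above
NASA_BUDGET = 20e9   # USD/year, the figure used in the post

# One kiloton (a million kg) to orbit:
kiloton_cost = 1_000_000 * COST_PER_KG
print(f"{kiloton_cost / 1e6:.0f} million USD, "
      f"{kiloton_cost / NASA_BUDGET:.2%} of one year's budget")

# 10,000 km^2 mirror, at an assumed ~4 g/m^2 film areal density:
mirror_mass_kg = 10_000 * 1e6 * 0.004   # area in m^2 times kg/m^2
mirror_fraction = mirror_mass_kg * COST_PER_KG / NASA_BUDGET
print(f"mirror launch: {mirror_fraction:.0%} of one year's budget")
```

At 4 g/m^2 the mirror launch comes out at roughly half a year's budget; a heavier film pushes the figure proportionally higher, so the claim stands or falls on how thin the mirror material can be made.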

Carl and Robin seconded.

Experiments like Biosphere 2 are orders of magnitude more efficient than space travel as a way to protect mankind.