"History is always for someone and for some purpose"
Maybe we need something comparable.
Good post. Start of a series or topical one-off? If the former, would like to read around and get my head in gear.
There are things of which we are pretty sure. What evidence is provided by an unsupported suggestion that a particular doomsday scenario is possible? We can't bother to investigate every crazy doomsday scenario suggested, so something must establish the priorities. How much value does an investigation triggered by an unsupported, a priori crazy idea about a particular form of existential risk add to a general investigation that is interested in considering any possible scenario? High public interest might warrant a public relations action, but why should it in itself require serious study?
"Good post." Ben Jones is wrong. This is a very, very excellent post. A candidate for Best. Post. Ever. from overcomingbias. Your final conclusion was unanticipated by me as a reader, but compelling.
I think I'm probably not the only one who has felt ambivalent about whether you're a talented existential risk problem solver (or a protean one), or the pet of talented existential risk problem solvers. This post makes a great case for putting you in the first category.
Ben Jones: One-off.
Vladimir, that's a good question, and while it doesn't have a simple answer, I'd say that both LA-602 and the RHIC issue were over the threshold that should trigger a careful analysis "just to be sure". Not prohibition, analysis. Alas, in our society, that analysis generally gets used by technophobes as a stalking-horse for prohibition, which makes people reluctant to analyze unless the danger is clear without analysis.
An alternative and perhaps better (shorter term, even, not 50 years) solution would be to make the technophobia go away - to regard it, in fact, as an existential risk in itself.
Now there's a risk analysis that could be done, today. The phobes would be helpless. What could they ban?
Eliezer, it might seem to you, as a non-physicist, that the RHIC idea was over the threshold, but to physicists it likely doesn't look that way. They are sure enough to place the suggestion in the same bin as Apocalypse-2012. Analysis is implicit in the rational evidence composing their expertise, and as you noted, a non-physicist must trust in the rationality of those who confidently refute the argument as nonsensical; but it is the rational estimate of those same people that determines whether to perform a dedicated analysis.
What about BSE? The Government said there was no risk, no problem; there turned out to be a problem; there was huge damage to the British beef farming industry; and certain people may have got nvCJD. I was a student in the high risk period, the 1980s, and I ate a lot of sausage, and I understand there is a very long possible incubation period.
It works both ways. In the case of BSE, such analysis as there was, was wrong; the risks turned out not to be justified, and the losses were appalling.
So people like me, who cannot independently analyse all the papers, have even less trust in Government and those who advise on such things.
Abigail: seeing as we haven't seen hide nor hair of BSE for years now, and seeing as there is no obvious ramp-up in vCJD (which you'd expect from an epidemic, even a long incubation epidemic, since some people get ill faster) then I think the justified conclusion on BSE is more like "it doesn't seem to jump the species gap regardless of exposure except in incredibly rare susceptible individuals". Appalling losses? Sure, financial losses, from the panic.
Eliezer,
You point to a problem: "You can't admit a single particle of uncertain danger if you want your science's funding to survive. These days you are not allowed to end by saying, "There remains the distinct possibility..." Because there is no debate you can have about tradeoffs between scientific progress and risk. If you get to the point where you're having a debate about tradeoffs, you've lost the debate. That's how the world stands, nowadays."
As a solution, you propose that "where human-caused uncertain existential dangers are concerned, the only way to get a real, serious, rational, fair, evenhanded assessment of the risks, in our modern environment,
Is if the whole project is classified, the paper is written for scientists without translation, and the public won't get to see the report for another fifty years."
Wouldn't it just be easier to convince the public to accept a certain amount of risk, to accept debates about trade-offs? What you propose would require convincing that same public to give the government a blank check to fund secret projects that are being kept secret precisely because they present some existential threat. That might work for military projects, since the public could be convinced that the secrecy is necessary to prevent another existential threat (e.g., commies).
It just seems easier to modify public sentiment so that they accept serious discussions of risk. Otherwise, you have to convince them to trust scientists to accurately evaluate those risks in utter secrecy, scientists who will be funded only if they find that the risks are acceptable.
Anyways, I'm unconvinced that secrecy was the cause for the difference in rhetorical style between LA-602 and the RHIC review. What seems more plausible to me is this: Teller et al. could afford to mention that risks remained because they figured that a military project like theirs would get funded anyways. The authors of the RHIC Review had no such assurance.
An alternative and perhaps better (shorter term, even, not 50 years) solution would be to make the technophobia go away - to regard it, in fact, as an existential risk in itself.
How?
Wouldn't it just be easier to convince the public to accept a certain amount of risk, to accept debates about trade-offs?
How?
Keeping secrets is a known technology. Overcoming widespread biases is the reason we are here. If you have a way to sway the public on these issues, please, share.
@Vladimir: We can't bother to investigate every crazy doomsday scenario suggested
This is a strawman; nobody is suggesting investigating "every crazy doomsday scenario suggested". A strangelet catastrophe is qualitatively possible according to accepted physical theories, and was proposed by a practicing physicist; it's only after doing quantitative calculations that it can be dismissed as a threat. The point is that such important quantitative calculations need to be produced by less biased processes.
Wilczek was asked to serve on the committee "to pay the wages of his sin, since he's the one that started all this with his letter."
Moral: if you're a practicing scientist, don't admit the possibility of risk, or you will be punished. (No, this isn't something I've drawn from this case study alone; this is also evident from other case studies, NASA being the most egregious.)
You can't admit a single particle of uncertain danger if you want your science's funding to survive. ... So no one can do serious analysis of existential risks anymore, because just by asking the question, you're threatening the funding of your whole field.
And by writing this blog post, you're doing what to the LHC...?
Eliezer has given a number of instances of relevant things that have changed in the last 65 years, but I wonder if the physicists may have themselves changed as well. Certainly the selection criteria used to bring people into physics, to bring them to academic posts, and to train them have changed substantially, though less than in other sciences. I don't know how much relevance this might have to the methods they used recently vs. in 1944.
It's entirely possible that there are classified analyses of the RHIC/LHC risks which won't be released for decades.
What public discussion was occurring in the 40s regarding the risks of atmospheric ignition?
So, you are analogizing from a secret discussion about a secret project, to suggest the existence of secret discussions about a public project?
Eliezer, the cosmic ray argument doesn't work against black holes or strangelets; cosmic ray collision products have a large momentum relative to Earth, whereas some of the particle accelerator collision products would end up going slowly enough to cause damage. There are, however, other strong arguments against both the black hole and strangelet scenarios, for which see http://lhc2008.web.cern.ch/LHC2008/documents/LSAG.pdf (note that the author confirms the part about the cosmic ray argument not working). In the black hole case, for there to be a disaster, it would have to be true that 1) the LHC unexpectedly creates micro black holes, 2) Hawking radiation unexpectedly doesn't work, 3) a black hole eats the Earth with unexpected speed, and 4) all this happens even though micro black holes don't seem to have eaten neutron stars.
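To sketch the kinematics behind that first point (my own back-of-the-envelope numbers, not taken from the LSAG report): for a cosmic ray proton striking a stationary proton, the available center-of-mass energy is

$$\sqrt{s} \approx \sqrt{2\,E_{\mathrm{cr}}\,m_p c^2},$$

so matching the LHC's 14 TeV requires $E_{\mathrm{cr}} \approx (14\ \mathrm{TeV})^2 / (2 \times 0.94\ \mathrm{GeV}) \approx 10^{17}\ \mathrm{eV}$, and the center of mass of such a collision moves through Earth's frame with a Lorentz factor of roughly $E_{\mathrm{cr}}/\sqrt{s} \approx 7000$. Even a collision product born at rest in the center-of-mass frame is therefore ultra-relativistic in Earth's frame, whereas the LHC's symmetric head-on collisions can leave some products nearly at rest.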
First, the link is broken.
We don't need to use the Earth for the cosmic ray argument. In fact, it's best if we don't. What should we use? White dwarfs and neutron stars. These have a density that makes runaway processes exponentially easier (and I mean that 'exponentially' literally, and it also applies figuratively). They have not collapsed into black holes or strangelet soup either.
Also, we aren't anthropically dependent on them the way we are on Earth itself.
The wayback machine still has it. Yes, it talks about white dwarfs and neutron stars; that's what "other strong arguments" referred to. There's more discussion in the comments here.
I wrote, "Wouldn't it just be easier to convince the public to accept a certain amount of risk, to accept debates about trade-offs?"
Zubon replied:
How?
Keeping secrets is a known technology. Overcoming widespread biases is the reason we are here. If you have a way to sway the public on these issues, please, share.
"Keeping secrets" is a vague description of Eliezer's proposal. "Keeping secrets" might be known technology, but so is "convincing the public to accept risks." (E.g., they accept automobile fatality rates.) Which of these "technologies" would be easier to deploy in this case? That depends on the particular secrets to be kept and the particular risks to be accepted.
Since Eliezer talked about keeping projects "classified", I assume that he's talking about government-funded research. So, as I read him, he wants the government to fund basic, nonmilitary research that carries existential risks, but he wants the projects and the reports on the existential risks to be kept classified.
In a democracy, that means that the public, or their elected representatives, need to be convinced to spend their tax dollars on research, even while they know that they will not be told of the risks, or even of the nature of the specific research projects being funded. That is routine for military research, but there the public believes that the secrecy is protecting them from a greater existential threat. Eliezer is talking about basic research that does not obviously protect us from an existential threat.
The point is really this: To convince the public to fund research of this nature, you will need to convince them to accept risks anyways, since they need to vote for all this funding to go into some black box marked "Research that poses a potential existential threat, so you can't know about it." So, Eliezer's plan already requires convincing the public to accept risks. Then, on top of that, he needs to keep the secrets. That's why it seems to me that his plan can only be harder than mine, which just requires convincing them to accept risks, without the need for the secrecy.
""Keeping secrets" is a vague description of Eliezer's proposal. "Keeping secrets" might be known technology, but so is "convincing the public to accept risks." (E.g., they accept automobile fatality rates.)"
Tyrrell, this is an impressive marriage of bad arguments. You deserve some type of prize for this level of Caledonian-bait.
Hopefully: While I agree that Tyrrell's arguments are poor, it seems to me that he is being naive, not trolling. He may also simply be joking. In any event, initiating hostility as you seem to be doing in your post degrades a thread. Please don't do it.
...he wants the government to fund basic, nonmilitary research that carries existential risks, but he wants the projects and the reports on the existential risks to be kept classified.
What makes you think they don't?
1) Existential risk is real and warrants government-funded research.
2) The results, if useful, would not be a sexed-up dodgy dossier, but a frank, measured appraisal of ER. As such, they would acknowledge a nonzero risk to humanity from sundry directions.
3) As such, they would be blown out of proportion by media reporting, as per Eliezer's analysis. This isn't certain, but it's highly likely. "Gov't Report: The End Is Nigh For Humankind!"
4) Natural conclusion - keep it classified, at least for the time being.
Wouldn't it just be easier to convince the public to accept a certain amount of risk, to accept debates about trade-offs?
Almost certainly not. As the piece above says, public & media reaction tends to be 100% positive or 100% negative. If you think you can talk the world out of this, there's a Nobel Prize in it for you.
Sorry, I just remembered: the Titanic. Surely everyone who embarked on that ship was more worried about other things: financial troubles, health problems, love affairs, and so on. After all, the ship was built to be unsinkable.
Regarding the LHC, the mentioned risks are black holes and strangelets. What about the things no one has thought about? The truth is, no one knows what might come out when we open Pandora's box.
The truth is, no one knows what might come out when we open Pandora's box.
That's true of anything new. Someone speaking a sequence of nonsense syllables might accidentally cast a magic spell - but we don't concern ourselves with that possibility, because we have absolutely no way to anticipate or protect against it. 'Spells' are not part of our working model of the universe.
There are known dangers and unknown dangers, anticipatable dangers and unanticipatable dangers. If the peril is both unknown and unanticipatable, we have nothing to be concerned about - if it's known, we can react appropriately. The problem is the unknown, anticipatable risks.
What makes you think they don't?
I acknowledge that they probably do so with some nonzero number of projects. But I take Eliezer to be advocating that it happen with all projects that carry existential risk. And that's not happening; otherwise Eliezer wouldn't have had the example of the RHIC to use in this post. Now, perhaps, notwithstanding the RHIC, the government already is classifying nearly all basic science research that carries an existential threat, but I doubt it. Do you argue that the government is doing that? Certainly, if it's already happening, then I'm wrong in thinking that that would be prohibitively difficult in a democracy.
1) Existential risk is real and warrants government-funded research.
Agreed.
2) The results, if useful, would not be a sexed-up dodgy dossier, but a frank, measured appraisal of ER. As such, they would acknowledge a nonzero risk to humanity from sundry directions.
Agreed.
3) As such, they would be blown out of proportion by media reporting, as per Eliezer's analysis. This isn't certain, but it's highly likely. "Gov't Report: The End Is Nigh For Humankind!"
Some media would react that way. And then some media would probably counter-react by exaggerating whatever problem warranted the research in the first place. Consider, e.g., the torture of detainees in the "war on terror". Some trumpet the threat of a terrorist nuke destroying a city if we don't use torture to prevent it. Others trumpet the threat of our torture creating recruits for the terrorists, resulting in higher odds of a nuke destroying a city.
4) Natural conclusion - keep it classified, at least for the time being.
It's far from obvious to me that, in the example of torture, the best solution is to keep the practice classified. Obviously the practitioners would prefer to keep it that way. They would prefer that their own judgment settled the matter. I'm inclined to think that, while their judgment is highly relevant, sunlight would help keep them honest.
Analogously, I think that sunlight improves the practice of science. Not in every case, of course. But in general I think that the "open source" nature of science is a very positive aspect. It is a good ideal to have scientists expecting that their work will be appraised by independent judges. I realize that I'm going along with the conventional wisdom here, but I'm still a number of inferential steps away from seeing that it's obviously wrong.
As the piece above says, public & media reaction tends to be 100% positive or 100% negative. If you think you can talk the world out of this, there's a Nobel Prize in it for you.
Can you elaborate on your claim that "public & media reaction tends to be 100% positive or 100% negative"? Do you mean that, on most risky projects, the entire public, and all the media, react pro or con in total unanimity? Or do you mean that, on most issues of risk, each individual and each media outlet either supports or opposes the project 100%? Or do you mean something else? I'll respond after your clarification.
I should emphasize that I'm not arguing for using direct democracy to determine what projects should be funded. The process to allocate funding should be structured so that the opinions of the relevant experts are given great weight, even if the majority of the population disagrees. What I'm skeptical about is the need for secrecy.
Note that Eliezer does not give a recommendation for solving this problem. Rather, he gives advice for how to get an unbiased risk assessment. But that is only part of the problem.
Unfortunately, his solution does not solve the social problem of deciding what existential risks are worth taking. His way of getting an unbiased assessment requires not informing society of the result! The only way I could see to use it would be for society to fully delegate decision making on these matters to a committee which decides in secret. That would be hard to justify for projects like particle colliders.
And even then, Eliezer's requirements are necessary but not sufficient. We are lucky that the LA-602 authors were not committed to making sure the nuclear bomb tests proceeded. That is not necessarily the case with particle theorists who have worked for many years to get these accelerators approved, funded and built. Analysis by partisans, even if by scientists and for scientists, is likely to be biased.
The process to allocate funding should be structured so that the opinions of the relevant experts are given great weight, even if the majority of the population disagrees. What I'm skeptical about is the need for secrecy.
As well you should be. What difference does it actually make if the media scaremonger a project? A group of physicists are still perfectly capable of getting together and conducting a quality analysis.
A government report that is going to be displayed to government officials and the public at large, written by people beholden to public opinion and the whims of government, will be written with those audiences in mind. Producing a quality government report therefore requires that it be kept secret, and hence not written for non-experts. Emphasis on government. Producing a quality report doesn't require secrecy, as long as the people performing the analysis don't have to worry about being trodden beneath the blind elephant.
A government report that is going to be displayed to government officials and the public at large, written by people beholden to public opinion and the whims of government, will be written with those audiences in mind.
Car mechanics and dentists are often paid both to tell us what problems need fixing and to fix them. That's a moral hazard that always exists when the expert who is asked to determine whether a procedure is advisable is the same as the expert who will be paid to perform the procedure.
There are several ways to address this problem. Is it so clear that having the expert determine the advisability in secret is the best way, much less a required way, in this case?
Tyrrell: OK, you seem to be serious, smart, thoughtful, and genuinely trying hard. For these reasons I think that you will eventually make an important contribution to the discussion of these subjects. However, it seems to me that you lack a great deal of background experience or familiarity with real public debate as it exists today in the US. I would recommend the book "The Myth of the Rational Voter" as a very partial correction to what seem to be seriously mistaken beliefs about how deliberation takes place and even about what most people think argument is or should be.
michael vassar, I'm familiar with that book. I haven't read it, but I listened to an hour-long interview with the author here: http://bloggingheads.tv/diavlogs/261
I think that the author made many good points there, and I take his theses seriously. However, I don't think that secrecy is usually the best solution to the problems he points out. I favor structuring the institutions of power so that "cooler heads prevail", rather than trying to keep "warmer heads" ignorant. Decision makers should ultimately be answerable to the people, but various procedural safeguards such as indirect representation (e.g., the electoral college), checks-and-balances, requiring super-majorities, and so forth, can help ameliorate the impulsiveness or wrong-headedness of the masses.
And I think that by-and-large, people understand the need for safeguards like these. Many might not like some of the specific safeguards we use. The electoral college certainly has come in for a lot of criticism. But most people understand, to some degree, the frailties of human nature that make safeguards of some kind necessary. Enough of us are in the minority on some position so that most of us don't want the whim of the majority to be always instantly satisfied. In some liberals, this manifests as a desire that the courts overturn laws passed by a majority of the legislature. In some conservatives, this manifests as support for the theory of the unitary executive. But the underlying problem seems to be recognized across the spectrum.
So, in effect, I'm arguing that the people can be counted on to vote away their right to completely control scientific research. Indeed, they have already done this by implementing the kinds of procedural safeguards I mentioned above.
I realize that that might appear to conflict with my skepticism that they would vote away their right to know about the existential risks of the research they fund. But I think that there's a big difference. In the former case, the people are saying, "we shouldn't be able to influence research unless we care enough to work really, really hard to do so." In the latter case, you're asking them to say, "we shouldn't even know about the research so that, no matter how much we would care if we were to know, still we can do nothing about it." It seems unrealistic to me to expect that, except in special cases like military research.
So I don't think that secrecy is necessary in general to protect science from public ignorance. Other, better, means are available. Now, in this post I've emphasized an "institutional safeguards" argument, because I think that that most directly addresses the issues raised by the book you mentioned. But I still maintain my original argument, which is that it's easier to convince the public to fund risky research than it is to convince them to fund risky research and to vote that it be kept secret from them. This seemed to be the argument of mine that engendered the most skepticism, but I don't yet see what's causing the incredulity, so I don't know what would make that argument seem more plausible to the doubters or, alternatively, why I should abandon it.
If the secret report comes back "acceptable risk" I suppose it just gets dumped into the warehouse from Raiders of the Lost Ark, but what if it doesn't?
Perhaps such a report was produced during the construction of the SSC?
What if the report is about something not under monolithic control?
In the latter case, you're asking them to say, "we shouldn't even know about the research so that, no matter how much we would care if we were to know, still we can do nothing about it."
On this site, there seems to be a broad approval of the institution of an aristocracy of rationality. Having the best rule is always a good idea, but aristocracies tend to decay in predictable ways - and I rather doubt the hidden assumption that we can evaluate ourselves as being worthy to take the part of rulers. Overestimating one's own rationality is a cognitive bias that is virtually omnipresent.
Note that Eliezer does not give a recommendation for solving this problem. Rather, he gives advice for how to get an unbiased risk assessment. But that is only part of the problem.
I would just like to endorse Hal Finney's remark here. I don't know how to solve the general problem here. I am just suggesting, based on the LA-602 and RHIC Review, that having a classified analysis is better ceteris paribus than not having one.
As soon as they arrived and were briefed on the goal of the project, the Los Alamos scientists began to have grave doubts about the morality of the project, which doubts never left them. Since the defeat of the Nazis was the consideration that caused many of the scientists to stay on despite their grave doubts, after the defeat of the Nazis, the grave doubts probably became harder to ignore, and the date on the document Eliezer links to is August 14, 1946.
In contrast, the professional high-energy physicists involved in the debate about the RHIC had no more doubts about the morality of their enterprise than any other highly prestigious professionals in our society (namely, few chronic doubts indeed).
(Now watch: some ideologue will reply that guilt is never healthy and has never contributed to an improved outcome in any endeavor.)
This is one reason I'm skeptical of the consensus on global warming and what should be done about it. Most of the articles on global warming I read smell much more like RHIC than like LA-602.
If the investigators have a strong tendency towards objectivity, then freeing them from external obligations and concerns should improve the quality of their conclusions.
But that's a very big if - and when the investigators have biases that need external cross-checking and correction, as is the case with most people most of the time, eliminating outside influence is lethal.
Tyrrell: people are especially confused about epistemology. They think that it is inherently unethical to acknowledge one's ignorance. I don't think that any beliefs about how government should be structured are common among ordinary people other than an affirmation of however they are told that things are structured. In practice, it is an old observation that it is easier to ask forgiveness than permission. As an elected leader it is even easier to get the people to accept secrecy without ever challenging it than to ask forgiveness. Of course, the people want to elect the available leader who seems most like one of them whenever they can get away with it, and such leaders are, as the last 7 years show, utterly disastrous.
They think that it is inherently unethical to acknowledge one's ignorance.
I really don't see this; it's taken as a sign of weakness (which is terrible enough), but who actually thinks it's wrong, even in the same twisted way as, oh, homosexuality, or disrespecting authority? (Operationally - who would condemn it using specifically moral language?) Then again, maybe there isn't much distinction between these two perceptions.
By the way, does anybody have information about Soviet h-bomb research? Is there any information about Russian physicists making similar calculations?
The Soviet archives are still very closed, and what's open is often not known in the West, so we can't make any strong argument from silence.
In this specific case, any Soviet examination or lack of examination is weaker evidence than it looks. The Soviets learned of the possibility & feasibility of h-bombs through spies, so they could have had access to this report or their spies' dismissal of the possibility (based on the report) or they could reason that the Americans who were sufficiently insightful to invent h-bombs (where they couldn't) were also insightful enough to check the safety issues.
Finally, given the general environmental fallout from all sorts of Soviet programs, it's not clear they ever cared about such issues very much. (For example, the Tsar Bomba was the result of a crash program for a political demonstration where the teams knew their designs and calculations were rushed & unreliable, and hence the bomb could have fizzled or been another Bikini Atoll or something. It was still detonated.)
Tyrrell, Caplan's critique of democracy focuses on our own system. It is based on Wittman's critique of traditional public choice. It used to be said that because we do not have direct democracy, we will have self-interested rent-seeking politicians and lobbyists, insufficient competition, and so on, resulting in politicians not following the "will of the people". Wittman showed those objections to be insubstantial, and in turn Caplan showed that this proves the deficiency of our democracy, since the "will of the people" is very wrong about many important things.
An objection I found to tiny-black-hole-swallows-the-universe is here. It's from a flippant CS professor rather than a physicist but it makes some sense.
CERN on its LHC:
Studies into the safety of high-energy collisions inside particle accelerators have been conducted in both Europe and the United States by physicists who are not themselves involved in experiments at the LHC... CERN has mandated a group of particle physicists, also not involved in the LHC experiments, to monitor the latest speculations about LHC collisions
Things that CERN is doing right: having the safety studies done by physicists who are not themselves involved in the LHC experiments, and mandating an independent group to keep monitoring new speculations.
If it is true that we live in a universe in which every possible future exists, then world destroying events don't matter at all.
Yes they do!
I don't want to live in one of the worlds that are going to die, so I'm going to do my best to make this world not be one of those (though I know that's not quite how it works).
Knowing that the world is deterministic does not change anything from our perspective, or relieve us of moral responsibility. It all adds up to normality!
"Keeping secrets" might be known technology, but so is "convincing the public to accept risks." (E.g., they accept automobile fatality rates.)
Has public opinion on auto safety changed over the years? We certainly don't require footmen to wave flags in front of cars anymore, but I doubt that ever reflected general concern.
What are good examples of big attempts to change public beliefs about risks or actions that might reflect beliefs about risks?
US government campaigns about drunk driving and smoking spring to mind. My impression is that they affected actions but not beliefs about risk. I'm not sure whether they linked belief to action better, or whether they changed action for other reasons (e.g., coolness).
Has public opinion on auto safety changed over the years? We certainly don't require footmen to wave flags in front of cars anymore, but I doubt that ever reflected general concern.
People allow doctors to use their best guess in prescribing medicine but don't allow them to retain and share outcome data for off label use of medicine so that their decisions can be evaluated collectively to generate information that might improve future use of medicine. The doctors have to essentially pretend inspired knowledge of what's medically right rather than admitting that they are making a best guess and searching for information with which to later revise that guess.
don't allow them to retain and share outcome data for off label use of medicine so that their decisions can be evaluated collectively
By whom? Who checks the data, who ensures there are no confounds, who evaluates and who concludes? Doctors already report unusual problems with medications, which is how we discover the drawbacks of drugs that somehow made it through the incredibly stringent gauntlet of drugmaker testing. cough
I think you're also grossly overestimating the concern doctors have for ensuring their patients are getting proper treatment. It seems to me that most doctors, like most patients, want to believe that what they're doing is effective, and don't want to doubt. So they don't, and take the choices they make for granted.
That looks to me more like condemnation of acting out of ignorance plus pretending ignorance isn't present unless it's confessed - which, OK, comes out to basically the same effect.
Is it just me, or did they completely ignore the following arguments in all the reports?
1) Over-reliance on the effects of Hawking radiation, leaving a big hole in their reasoning if it turns out it doesn't exist.
2) Extremely high pressures near the center of the Earth might increase the accretion rate substantially.
3) Any products of cosmic ray interaction with existing stellar bodies are very likely to escape the body's gravitational influence before doing any damage because of their near-c speeds, which is not the case in the LHC.
I read all the reports. I feel a bit better knowing that different commissions analyzed the risks involved, and that at least one of them was supposedly independent from CERN. But the condescending tone of the reports worries me; it seems that these calculations were not given the appropriate attention, rather being little more than a nuisance in the authors' agendas.
Would you write to someone in charge about your concerns, or are you betting on the "it's all good" side of things?
Jotaf, your concerns are specifically addressed in the recent LHC report.
In reverse order,
3) Bubbles of vacuum decay expanding at the speed of light are a problem in any reference frame, so they are well constrained by astronomical observations. Most cosmic rays are coming inwards, so many of the particles would have to escape through the planet, neutron star, etc. Anything with an electric or magnetic charge would be stopped (magnetic charges come on monopoles). This means everything but neutral black holes can be immediately bounded by astronomical evidence.
2) The conservative estimates are done with the density of neutron stars; iron at less than a TPa is nothing in comparison.
1) Section 4 gives a series of arguments, each assuming that the expected behavior that prevented disaster in the previous argument is mysteriously absent, and considering the situation again. Only the first assumes that Hawking radiation occurs; the last assumes neutral black holes that never decay, and bounds their growth rate.
To be sure, this report contains no equations, but it reads to me not like condescension, but a summary of the results of people who have done the math.
It seems like the details should be in the references. Try following the citations, e.g. next to the assertion that someone has bounded accretion rates of neutral hawking-free black holes assuming various numbers of small dimensions, or that hotter collisions should be less likely to produce strangelets.
A very interesting analysis, though I hope your overstatement is for effect...
It is in fact an overstatement, though the tendencies you describe are surely strong. It is interesting that my field, climatology, is often accused of drumming up an existential threat to preserve our funding, which would be almost as bad a moral failure as ignoring one. Of course, the situation is somewhat different, as physicists advocate bizarre experiments while we are suggesting that a bizarre experiment come to an end as soon as possible...
Thanks, I must have missed that in the myriad of information. I'll follow the references but your explanation is certainly more reassuring.
A new report (Steven B. Giddings and Michelangelo M. Mangano, Astrophysical implications of hypothetical stable TeV-scale black holes, arXiv:0806.3381) does a much better job at dealing with the black hole risk than the old "report" Eliezer rightly slammed. It doesn't rely on Hawking radiation (but has a pretty nice section showing why it is very likely) but instead calculates how well black holes can be captured by planets, white dwarfs and neutron stars (based on AFAIK well-understood physics, besides the multidimensional gravity one has to assume in order to get the threat in the first place). The derivation does not assume that Eddington luminosity slows accretion, and does a good job of examining how fast black holes can be slowed; it turns out that white dwarfs and neutron stars are good at slowing them. This is used to show that dangerously fast planetary accretion rates are incompatible with the observed lifetimes of white dwarfs and neutron stars.
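If I've read the paper correctly, the skeleton of the bound goes like this (my summary, not their notation): cosmic rays must already have produced any black holes the LHC could produce, and white dwarfs and neutron stars are dense enough to stop and trap them, so the observed survival of those stars for

$$t_{\mathrm{age}} \sim 10^{9}\text{--}10^{10}\ \mathrm{yr}$$

bounds the accretion time of a trapped black hole in stellar-density matter from below; and since accretion in ordinary terrestrial matter is far slower than in such dense matter, the corresponding accretion time for Earth comes out astronomically longer still.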
The best argument for Hawking radiation IMHO is that particle physics is time-reversible, so if there exist particle collisions producing black holes there ought to exist black holes decaying into particles.
Does anyone know of a solid proof that either 1) LHC won't create lab universes or 2) if so, it already happens all the time anyway? I expect that both 1) and 2) could be shown to be practically certain but I haven't seen it addressed anywhere.
Also, is time travel via traversable wormholes a possible risk? You'd expect it to already have happened, but maybe in a particle accelerator unlike in the wild time travelers would know the necessary coordinates or something. Again it seems quite improbable but it could have been addressed.
Nice work, but one nitpick.
The underestimate of the Castle Bravo yield was not due to a math error. As you noted in the next sentence, it was due to a failure to calculate possible fusion reactions involving 7Li (or, to be more precise, reactions with what results when 7Li is in a large flux of neutrons from the other thermonuclear reactions taking place). From an engineering point of view, if they had allowed a typical safety factor of 3 above their original yield estimate, they would have had a more appropriate exclusion zone.
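To make the engineering point concrete (my arithmetic, using the 4-8 megaton range quoted in the post): a safety factor of 3 on the upper estimate gives

$$3 \times 8\ \mathrm{Mt} = 24\ \mathrm{Mt} > 15\ \mathrm{Mt},$$

so an exclusion zone sized that way would have comfortably contained the actual yield.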
In contrast, the initial belief that nitrogen might undergo a self-sustained thermonuclear reaction resulted from a major math error by Teller when examining that process. The histories indicate he was prone to that sort of problem. (For example, his belief in a particular design for the Super was based on flawed calculations, and he only came up with the Ulam-Teller design after being convinced of the error.) It seemed to go along with his creativity.
You will notice that very large safety factors were built into the LA-602 calculation, ones that went well beyond the known uncertainties of the poorly known reaction cross sections. Similar things were done in the LHC study, but the problem is really one of statistics: these reactions take place all the time in the atmosphere, but the flux is low compared to the LHC. That is why they have to look beyond the earth for their extreme test cases.
Confessions of the Father of the Neutron Bomb, Sam Cohen:
The one idea dearest to Teller’s heart was the H-bomb. He and a couple of his cronies applied themselves to devising various schemes on designing such a weapon. All of them turned out to be impractical and most of them unworkable. Which never slowed him down in the slightest for reasons we’ll never know nor will he...One day, Teller announced he would be giving a colloquium on his work. Since this was a pretty fascinating subject that held the potential for providing a bang a hundred or even a thousand times bigger than the A-bomb, he spoke to a pretty full house, that included me.
...The briefing was over, there were lots of questions that Teller handled with aplomb, knowing far more about the subject than most in the audience. Finally there came a question that had nothing to do with whether or not his [h-]bomb was feasible. Instead, it was whether it would destroy the world by causing uncontrollable thermonuclear reactions in the earth’s atmosphere that would cause it to burn up, plus you and me and everyone else. Teller was on his game, as he always was, and replied that he had estimated this terrible possibility and we were quite safe — by about a factor of ten. Now there aren’t too many people who rest comfortably with assurances that mankind’s fate, let alone their own, is on the safe side by a factor of ten, although there are millions of smokers, including myself, drug addicts, sex addicts, who take much greater chances than that with their lives, and they know it. Except that they know it won’t happen to them, which is why many soldiers, at least as stupid as they’re brave, get Congressional Medals of Honor.
Naturally, the next question was: “How accurate is the nuclear data you used in your calculation?” As only Edward Teller is capable of doing, he replied, with a smirk-smile going from ear to ear: “Well, it’s possible that the data might be off by a factor of ten.” Which way, he didn’t profess to know, but I suspect that much of the audience didn’t sleep too well that night. (If you’re beginning to worry that you might not be sleeping too well from now on, I have to tell you that as it turned out, after careful nuclear measurements and detailed calculations were made, we were safe by far more than a factor of ten. It simply couldn’t happen.)
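To spell out the arithmetic that unsettled the audience (my gloss, not Cohen's): a nominal safety margin of 10 combined with cross-section data uncertain by a factor of 10 leaves, in the worst case,

$$\frac{10}{10} = 1,$$

i.e., no margin at all if the data error ran in the unfavorable direction.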
Cohen then goes on to discuss in the next section how egregiously biased & unrealistic was the study RAND performed for the Air Force in order to justify the military utility of h-bombs and hence their development. After mentioning that the nuclear stockpile now reflects Oppenheimer's strategy of smaller warheads, he concludes:
There’s a profound lesson to be learned about the great H-bomb debate; that it was a farce. I’m sure (or am I?) that many scholars now understand how farcical it was, but you’d never know it from their writings. However, whether generals, admirals, congressmen, and Presidents have grasped this, I wonder. In fact, I don’t wonder too much, for as I observe the passing scene with arguments on the Stealth bomber, the MX missile, and all that nuclear business that is supposed to mean so much for our survival, I don’t think they’ve learned a thing. They still carry on in the same way, fighting over expensive nuclear weapon systems they don’t understand, to be used in a war we don’t know how to fight.
I found this paper, which is interesting. At the start, the author tells an interesting anecdote related to the subject of this post:
The first occurred at Los Alamos during WWII when we were designing atomic bombs. Shortly before the first field test (you realize that no small scale experiment can be done -- either you have the critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "it is the probability that the test bomb will ignite the whole atmosphere," I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen -- after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "what have you done, Hamming, you are involved in risking all life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you." Yes, we risked all the life we knew of in the known universe on some mathematics. Mathematics is not merely an idle art form, it is an essential part of our society.
LA-602: Ignition of the Atmosphere with Nuclear Bombs, a research report from the Manhattan Project, is to the best of my knowledge the first technical analysis ever conducted of an uncertain danger of a human-caused extinction catastrophe.
Previously, Teller and Konopinski had been assigned the task of disproving a crazy suggestion by Enrico Fermi that a fission chain reaction could ignite a thermonuclear reaction in deuterium - what we now know as an H-Bomb. Teller and Konopinski found that, contrary to their initial skepticism, the hydrogen bomb appeared possible.
Good for their rationality! Even though they started with the wrong conclusion on their bottom line, they were successfully forced away from it by arguments that could only support one answer.
Still, in retrospect, I think that the advice the future would give to the past, would be: Start by sitting down and saying, "We don't know if a hydrogen bomb is possible". Then list out the evidence and arguments; then at the end weigh it.
So the hydrogen bomb was possible. Teller then suggested that a hydrogen bomb might ignite a self-sustaining thermonuclear reaction in the nitrogen of Earth's atmosphere. This also appeared extremely unlikely at a first glance, but Teller and Konopinski and Marvin investigated, and wrote LA-602...
As I understand LA-602, the authors went through the math and concluded that there were several strong reasons to believe that nitrogen fusion could not be self-sustaining in the atmosphere: it would take huge energies to start the reaction at all; the reaction would lose radiation from its surface too fast to sustain the fusion temperature; and even if the fusion reaction did grow, the Compton effect would increase radiation losses with volume(?).
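In schematic form (my paraphrase of the report's reasoning, not its actual equations), a self-sustaining burn requires the rate of fusion energy generation in a heated region of air to beat the rate of radiative loss:

$$\frac{dE_{\mathrm{fusion}}}{dt} > \frac{dE_{\mathrm{rad}}}{dt} \qquad \text{(Bremsstrahlung and Compton losses)},$$

and LA-602 argues that the inequality fails, with room to spare, at any temperature a bomb could plausibly produce.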
And we're still here; so the math, whatever it actually says, seems to have been right.
Note that the Manhattan scientists didn't always get their math right. The Castle Bravo nuclear test on March 1, 1954 produced 15 megatons instead of the expected 4-8 megatons due to an unconsidered additional nuclear reaction that took place in lithium-7. The resulting fallout contaminated fishing boats outside the declared danger zone; at least one person seems to have died.
But the LA-602 calculations were done with very conservative assumptions, and came out with plenty of safety margin. AFAICT (I am not a physicist) a Castle Bravo type oversight could not realistically have made the atmosphere ignite anyway, and if it did, it'd have gone right out, etc.
The last time I know of when a basic physical calculation with that much safety margin, and multiple angles of argument, turned out to be wrong anyway, was when Lord Kelvin showed from multiple angles of reasoning that the Earth could not possibly be so much as a hundred million years old.
LA-602 concludes: "There remains the distinct possibility that some other less simple mode of burning may maintain itself in the atmosphere..."
Decades after LA-602, another paper would be written to analyze an uncertain danger of human-created existential risk: The Review of Speculative "Disaster Scenarios" at RHIC.
The RHIC Review was written in response to suggestions that the Relativistic Heavy Ion Collider might create micro black holes or strangelets.
A B.Sc. thesis by Shameer Shah of MIT, Perception of Risk: Disaster Scenarios at Brookhaven, chronicles the story behind the RHIC Review:
The RHIC flap began when Walter Wagner wrote to Scientific American, speculating that the Brookhaven collider might create a "mini black hole". A reply letter by Frank Wilczek of the Institute for Advanced Study labeled the mini-black-hole scenario as impossible, but also introduced a new possibility, negatively charged strangelets, which would convert normal matter into more strange matter. Wilczek considered this possibility slightly more plausible.
Then the media picked up the story.
Shameer Shah interviewed (on Nov 22, 2002) Robert Jaffe, Director of MIT's Center for Theoretical Physics, a pioneer in the theory of strange matter, and primary author of the RHIC Review.
According to Jaffe, even before the investigative committee was convened, "No scientist who understood the physics thought that this experiment posed the slightest threat to anybody." Then why have the committee in the first place? "It was an attempt to take seriously the fears of science that they don't understand." Wilczek was asked to serve on the committee "to pay the wages of his sin, since he's the one that started all this with his letter."
Between LA-602 and the RHIC Review there is quite a difference of presentation.
I mean, just look at the names:
LA-602: Ignition of the Atmosphere with Nuclear Bombs
The Review of Speculative "Disaster Scenarios" at RHIC
See a difference?
LA-602 began life as a classified report, written by scientists for scientists. You're assumed to be familiar with the meaning of terms like Bremsstrahlung, which I had to look up. LA-602 does not begin by asserting any conclusions; the report walks through the calculations - at several points clearly labeling theoretical extrapolations and unexplored possibilities as such - and finally concludes that radiation losses make self-sustaining nitrogen fusion impossible-according-to-the-math, even under the most conservative assumptions.
The RHIC Review presents a nontechnical summary of its conclusions in six pages at the start, relegating the math and physics to eighteen pages of appendices.
LA-602 concluded, "There remains the distinct possibility that some other less simple mode of burning may maintain itself in the atmosphere..."
The RHIC Review concludes: "Our conclusion is that the candidate mechanisms for catastrophic scenarios at RHIC are firmly excluded by existing empirical evidence, compelling theoretical arguments, or both. Accordingly, we see no reason to delay the commissioning of RHIC on their account."
It is not obvious to my inexpert eyes that the assumptions in the RHIC Review are any more firm than those in LA-602 - they both seem very firm - but the two papers arise from rather different causes.
To put it bluntly, LA-602 was written by people curiously investigating whether a hydrogen bomb could ignite the atmosphere, and the RHIC Review is a work of public relations.
Now, it does seem - so far as I can tell - that it's pretty damned unlikely for a particle accelerator far less powerful than random cosmic rays to destroy Earth and/or the Universe.
But I don't feel any more certain of that after reading the RHIC Review than before I read it. I am not a physicist; but if I were a physicist, and I read a predigested paper like the RHIC Review instead of doing the whole analysis myself from scratch, I would be fundamentally trusting the rationality of the paper's authors. Even if I checked the math, I would still be trusting that the equations I saw were the right equations to check. I would be trusting that someone sat down, and looked for unpleasant contrary arguments with an open mind, and really honestly didn't find anything.
When I contrast LA-602 to the RHIC Review, well...
Don't get me wrong: I don't feel the smallest particle of real fear about particle accelerators. The basic cosmic-ray argument seems pretty convincing. Nature seems to have arranged for the calculations in this case to have some pretty large error margins. I, myself, am not going to worry about risks we can actually calculate to be tiny, when there are incalculable large-looking existential risks to soak up my concern.
But there is something else that I do worry about: The primary stake on the table with things like RHIC, is that it is going to get scientists into the habit of treating existential risk as a public relations issue, where ignorant technophobes say the risk exists, and the job of scientists is to interface with the public and explain to them that it does not.
Everyone knew, before the RHIC report was written, what answer it was supposed to produce. That is a very grave matter. Analysis is what you get when physicists sit down together and say, "Let us be curious," and walk through all the arguments they can think of, recording them as they go, and finally weigh them up and reach a conclusion. If this does not happen, no analysis has taken place.
The general rule of thumb I sometimes use, is that - because the expected utility of thought arises from the utility of what is being reasoned about - a single error in analyzing an existential risk, even if it "doesn't seem like it ought to change the conclusion", is worth at least one human life.
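One way to cash out that rule of thumb (illustrative numbers, chosen only to show the orders of magnitude): if a single analytical error raises the probability of a civilization-ending mistake by even

$$\Delta p \sim 1.5 \times 10^{-10}, \qquad \Delta E[\text{lives lost}] = \Delta p \times 6.7 \times 10^{9} \approx 1\ \text{life},$$

then at the present world population the expected cost of that one error already exceeds a human life, before counting future generations.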
The RHIC Review is not written to the standard of care that would be appropriate if, after the RHIC Review was written, some higher authority went through the paper; and if a single argument in it was wrong, anywhere, whether or not it changed the conclusion, a hostage got shot. That's how to think about analyzing existential risks. That way, for each and every element of the analysis, you can find it in yourself to be a little uncertain about that element, even if it doesn't seem "like it could possibly change the conclusion"; uncertainty invokes curiosity.
The RHIC Review was produced by authors who were already sure that the RHIC couldn't destroy the Earth; the problem at hand was explaining this to the public. If the authors decided just by eyeballing the problem that the RHIC couldn't destroy the Earth, then the only actual analysis that took place was conducted in 5 seconds. Yes, it's a lopsided issue, but it seems that as a general matter of policy, any existential risk at all deserves a longer and truly curious analysis than that.
Though I don't really blame the RHIC Review's authors. No one ever told them that there was such a thing as existential risk, or that it raised the standards of analysis beyond what was usual in a scientific paper, or that rational analysis requires placing yourself into a state of genuine uncertainty about each individual element's exact value...
And the much greater reason I don't blame them, is that between the 1940s and today, society has developed a "Gotcha!" attitude toward risk.
You can't admit a single particle of uncertain danger if you want your science's funding to survive. These days you are not allowed to end by saying, "There remains the distinct possibility..." Because there is no debate you can have about tradeoffs between scientific progress and risk. If you get to the point where you're having a debate about tradeoffs, you've lost the debate. That's how the world stands, nowadays.
So no one can do serious analysis of existential risks anymore, because just by asking the question, you're threatening the funding of your whole field.
The number one lesson I take from this whole issue is that where human-caused uncertain existential dangers are concerned, the only way to get a real, serious, rational, fair, evenhanded assessment of the risks, in our modern environment,
Is if the whole project is classified, the paper is written for scientists without translation, and the public won't get to see the report for another fifty years.
This is the lesson of LA-602: Ignition of the Atmosphere with Nuclear Bombs and the Review of Speculative "Disaster Scenarios" at RHIC. Read them and weep.