Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship, is a Penn State distinguished alumnus, and is a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including in Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here) and on Estonian Public Radio, Radio New Zealand, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
I agree that scaling up ahead of time would be the best. One possibility might be convincing fluorescent bulb manufacturers to advocate for stricter indoor air quality standards, which could be partly met by converting their production to UV. This could save the fluorescent factories from being shut down, since fluorescents are going to be banned in most applications within a couple of years.
But since we don't have widespread use or stockpiles, I think we need a backup plan for fast scale-up in case the pandemic hits soon.
Don't ride motorcycles; avoid extreme sports, snow sports, and mountaineering; and beware of long car rides. The younger you are, the more strongly this likely holds.
Also, don't live in NATO cities because of the threat of nuclear war, and ideally live in places that would likely fare better in an extreme pandemic, or be ready to relocate if one occurs.
I thought the diagram was very helpful. It looks like the integral of intensity over the depth of live skin cells is about an order of magnitude higher for 254 nm (the inexpensive and efficient mercury discharge line). So you're saying that because far UVC is more strongly absorbed in the first ~10 microns, the inactivation rate of bacteria and viruses is higher?
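To make the comparison concrete, here is a minimal sketch of the kind of integral I have in mind, assuming simple Beer-Lambert attenuation with depth. The penetration depths and layer thicknesses are purely illustrative placeholders, not values read off the diagram:

```python
import math

# Minimal sketch: integrate I(z) = I0 * exp(-z / delta) over the live-cell layer,
# assuming simple Beer-Lambert attenuation. All depths and penetration lengths
# below are illustrative assumptions, not measured values or numbers from the diagram.

I0 = 1.0            # incident intensity (arbitrary units)
z_dead = 5e-6       # assumed thickness of the dead outer skin layer (m)
z_live_end = 55e-6  # assumed bottom of the live-cell layer considered (m)

# Assumed 1/e penetration depths in skin (illustrative only)
deltas = {"222 nm (far UVC)": 5e-6, "254 nm (mercury line)": 30e-6}

doses = {}
for label, d in deltas.items():
    # Analytic integral of I0*exp(-z/d) from z_dead to z_live_end
    doses[label] = I0 * d * (math.exp(-z_dead / d) - math.exp(-z_live_end / d))
    print(f"{label}: integrated dose to live cells = {doses[label]:.2e} (arb. units x m)")

print(f"254/222 dose ratio = {doses['254 nm (mercury line)'] / doses['222 nm (far UVC)']:.0f}x")
```

With these made-up inputs the ratio comes out around 10x, but the real answer obviously depends on the actual penetration depths in the diagram.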
I'm not sure if this changes things, but the probabilities in the OP were reversed:
> If there was a button that would kill me with a 60% probability and transport me into a utopia for billions of years with a 15% probability, I would feel very scared to press that button, despite the fact that the expected value would be extremely positive compared to living a normal life.
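Either way, the expected-value claim is easy to check with rough numbers; here is a minimal sketch (the remaining "normal life" length and the utopia duration below are my own illustrative assumptions):

```python
# Rough expected-value check with illustrative numbers: assume ~50 remaining
# years for a normal life and a utopia lasting 2 billion years. Both figures
# are assumptions for illustration only; outcomes are valued in life-years.

normal_life_years = 50
utopia_years = 2e9

def button_ev(p_death, p_utopia):
    # Any remaining probability mass is treated as continuing a normal life.
    p_normal = 1 - p_death - p_utopia
    return p_death * 0 + p_utopia * utopia_years + p_normal * normal_life_years

print("Not pressing:                 ", normal_life_years, "expected years")
print("Press (60% death, 15% utopia):", f"{button_ev(0.60, 0.15):.3g}", "expected years")
print("Press (15% death, 60% utopia):", f"{button_ev(0.15, 0.60):.3g}", "expected years")
# Under either ordering the expected value dwarfs a normal life, so the
# hesitation is about risk aversion rather than expected value.
```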
I feel your pain. After many rejections, I've managed to get about 10 papers through peer review on transformative AI, so it is possible! Honestly, I think publishing on resilience to nuclear winter is even worse. Best of luck!
> However, a decade and a half after those first demo drives, Waymo has finally hit a point where the error rate is so low that it's possible to pull the human safety monitor out of the car completely. Suddenly you have a new kind of post-labor business model that's potentially much more valuable - an autonomous fleet that can run 24 hours a day with minimal labor costs and with perfectly consistent service and safe driving. This corresponds to the second bend in the graph.
They pulled the human safety monitor out of the car, but I think humans are still doing work remotely (at Cruise, each remote monitor was overseeing 15-20 cars as of 2023). But that can still be consistent with minimal labor costs.
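As a back-of-the-envelope illustration of why that sharing ratio still means minimal labor cost per car (the hourly wage below is an illustrative assumption):

```python
# Back-of-the-envelope labor cost per vehicle-hour: one onboard safety driver
# per car versus one remote monitor shared across ~15-20 cars (the ratio
# reported for Cruise in 2023). The hourly wage is an illustrative assumption.

hourly_wage = 30.0            # assumed fully loaded labor cost, $/hour
cars_per_safety_driver = 1
cars_per_remote_monitor = 17  # midpoint of the reported 15-20 range

cost_with_driver = hourly_wage / cars_per_safety_driver
cost_with_remote = hourly_wage / cars_per_remote_monitor

print(f"Labor cost per vehicle-hour, onboard driver:  ${cost_with_driver:.2f}")
print(f"Labor cost per vehicle-hour, remote monitors: ${cost_with_remote:.2f}")
print(f"Reduction: ~{cost_with_driver / cost_with_remote:.0f}x")
```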
Here's the equivalent poll for LessWrong. And here's my summary:
"Big picture: the strongest support is for pausing AI now if done globally, but there's also strong support for making AI progress slow, pausing if disaster, pausing if greatly accelerated progress. There is only moderate support for shutting AI down for decades, and near zero support for pausing if high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the extreme critics saying rationalists want to accelerate AI in order to live forever are incorrect, and also the other extreme critics saying rationalists don't want any AGI are incorrect. Overall, rationalists seem to prefer a global pause either now or soon."
> - Heuristic C: “If something has a >10% chance of killing everyone according to most experts, we probably shouldn’t let companies build it.”
> IMO, it’s hard to get a consensus for Heuristic C at the moment even though it kind of seems obvious. It’s even hard for me to get my own brain to care wholeheartedly about this heuristic, to feel its full force, without a bunch of “wait, but …”.
> Heuristic F: “Give serious positive consideration to any technology that many believe might save billions of lives.”
That’s a big consideration for short/medium-termists. Could another heuristic (for the longtermists) be Maxipok (maximize the probability of an OK outcome)? By Bostrom’s definition of X risk, a permanent pause is an X catastrophe. So if one thought the probability of the pause becoming permanent was greater than p(X catastrophe|AGI), then a pause would not make sense. Even if one thought there were no chance of the pause becoming permanent, if the background X risk per year were greater than the reduction in p(X catastrophe|AGI) achieved by each year of pause, pausing would also not make sense from a longtermist perspective. Putting these together, it’s not clear that p(X catastrophe|AGI) ~10% should result in companies not being allowed to build it (though stronger regulation could very well make sense).
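To make that trade-off concrete, here is a rough sketch with made-up numbers; every input below is an illustrative assumption rather than an estimate, and probabilities are simply added rather than compounded to keep it short:

```python
# Longtermist back-of-the-envelope: does a pause of `pause_years` lower the
# overall probability of an existential catastrophe? All inputs are illustrative
# assumptions; background risk is counted only in the pause branch, on the
# assumption that aligned AGI would largely eliminate it.

p_xcat_given_agi = 0.10            # assumed p(X catastrophe | AGI built now)
risk_reduction_per_year = 0.005    # assumed reduction in that risk per year of pause
background_xrisk_per_year = 0.002  # assumed non-AGI X risk per year during the pause
p_pause_permanent = 0.01           # assumed chance the pause never ends (itself an X catastrophe)

pause_years = 10

risk_if_no_pause = p_xcat_given_agi

risk_if_pause = (
    p_pause_permanent
    + (1 - p_pause_permanent) * (
        background_xrisk_per_year * pause_years
        + max(p_xcat_given_agi - risk_reduction_per_year * pause_years, 0)
    )
)

print(f"X-catastrophe probability, build now:        {risk_if_no_pause:.3f}")
print(f"X-catastrophe probability, {pause_years}-year pause: {risk_if_pause:.3f}")
# With these made-up inputs the pause comes out slightly ahead, but raising the
# chance of a permanent pause or the background risk flips the conclusion.
```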
> I can strongly confirm that few of the people worried about AI killing everyone, or EAs that are so worried, favor a pause in AI development at this time, or supported the pause letter or took other similar actions.
> An especially small percentage (but not zero!) would favor any kind of unilateral pause, either by Anthropic or by the West, without the rest of the world.
>Holly Elmore (PauseAI): It's kinda sweet that PauseAI is so well-represented on twitter that a lot of people >think it *is* the EA position. Sadly, it isn't.
>The EAs want Anthropic to win the race. If they wanted Anthropic paused, Anthropic would kick those >ones out and keep going but it would be a blow.
I tried to get at this issue with polls on the EA Forum and LW. For EAs, 26% want to stop or pause AI globally, and 13% want to pause it even if only done unilaterally. I would not call this an especially small percentage.
My summary for EAs was: "13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab. So if I had to summarize the median respondent, it would be strong regulation for AI or pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated."
My summary for LW was: "the strongest support is for pausing AI now if done globally, but there's also strong support for making AI progress slow, pausing if disaster, and pausing if greatly accelerated progress. There is only moderate support for shutting AI down for decades, and near zero support for pausing if high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the extreme critics saying rationalists want to accelerate AI in order to live forever are incorrect, and also the other extreme critics saying rationalists don't want any AGI are incorrect. Overall, rationalists seem to prefer a global pause either now or soon."
What about nuclear war? I think a pre-emptive strike is plausible if one country looks likely to gain power over the world via aligned AI.