Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 134 publications (>4400 citations, >50,000 downloads, h-index = 34, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here) and on Estonian Public Radio; WGBH Radio, Boston; and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.

Wiki Contributions


Data centers running large numbers of AI chips will obviously run them for as many hours as possible, since they are rapidly depreciating, expensive assets. Hence, each additional H100 requires an increase in peak power grid capacity, meaning new power plants.
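To put a rough number on the grid capacity per chip, here is a back-of-envelope sketch. The cluster size and PUE are illustrative assumptions of mine, not figures from the comment; the ~700 W board power is the commonly cited H100 maximum.

```python
# Back-of-envelope: peak grid capacity needed for a large H100 cluster.
# NUM_CHIPS and PUE are illustrative assumptions, not claims from the comment.

H100_POWER_W = 700       # approximate max board power of one H100
PUE = 1.3                # assumed data-center power usage effectiveness
NUM_CHIPS = 100_000      # hypothetical cluster size

peak_grid_mw = NUM_CHIPS * H100_POWER_W * PUE / 1e6
print(f"Peak grid draw: {peak_grid_mw:.0f} MW")  # ≈ 91 MW
```

Because the chips run near-continuously, this is effectively firm demand: the grid must supply roughly this much extra capacity around the clock.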

My comment here explains how the US could free up more than 20% of current electricity generation for AI, and my comment here explains how the US could produce more than 20% extra electricity with current power plants. Yes, duty cycle is an issue, but backup generators (e.g. at hospitals) could come online during peak demand, if the price is high enough, to ensure that the chips could run continuously.

If you pair solar with compressed-air energy storage, you can inexpensively (unlike with chemical batteries) get to around 75% utilization of your AI chips (with several days of storage). I'm not sure that's enough, so natural gas would be good for the other ~25%. (Wind power is also anticorrelated with solar both diurnally and seasonally, but you might not have good wind resources nearby.)
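A rough sizing sketch for the solar-plus-storage case, under my own illustrative assumptions (a ~25% solar capacity factor, "several days" of storage taken as three days, a 100 MW load); the comment itself gives only the ~75% utilization figure:

```python
# Rough sizing for solar + compressed-air storage to run chips ~75% of hours.
# All three constants below are illustrative assumptions, not source figures.

LOAD_MW = 100        # hypothetical constant chip load
SOLAR_CF = 0.25      # assumed annual solar capacity factor
TARGET_UTIL = 0.75   # fraction of hours the chips actually run
STORAGE_DAYS = 3     # assumed storage duration

# Solar nameplate sized so average output covers the served load:
solar_nameplate_mw = LOAD_MW * TARGET_UTIL / SOLAR_CF
# Storage energy to ride through several sunless days at full load:
storage_mwh = LOAD_MW * 24 * STORAGE_DAYS

print(solar_nameplate_mw, storage_mwh)  # 300.0 MW nameplate, 7200 MWh storage
```

The takeaway is that the overbuild is large (roughly 3x nameplate here) but the storage, not the panels, is what chemical batteries would make prohibitively expensive at multi-day durations.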

Natural gas is a question of fact. I have multiple sources who confirmed Leopold's claims here, so I am 90% confident that if we wanted to do this with natural gas, we could. I am 99%+ sure we need to get our permitting act together, and would be even without AI as a consideration…

A key consideration is that if there is not time to build green energy, including fission, and we must choose, then natural gas (IIUC) is superior to oil and obviously vastly superior to coal.

My other comment outlined how >20% of US electricity could be freed up quickly through conservation driven by high electricity prices. The other way the US could get >20% of current US electricity for AI without building new power plants is running the ones we have more. This can be done quickly for natural gas by diverting it from other uses (the high price will drive conservation). There are not many other uses for coal, but agricultural residues or wood could potentially be co-fired in coal power plants. If people didn't mind spending a lot of money on electricity, petroleum distillates could be diverted to some natural gas power plants.
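The "run existing plants more" claim comes down to capacity-factor arithmetic. A minimal sketch, where the fleet size and capacity factors are illustrative assumptions of mine (the comment does not give specific figures):

```python
# Extra annual generation from running an existing fleet at a higher
# capacity factor. The example numbers are illustrative assumptions.

HOURS_PER_YEAR = 8760

def extra_twh(nameplate_gw, current_cf, target_cf):
    """Additional annual generation (TWh) from raising a fleet's capacity factor."""
    return nameplate_gw * (target_cf - current_cf) * HOURS_PER_YEAR / 1000

# e.g. a hypothetical 400 GW gas fleet moving from 40% to 60% utilization:
print(f"{extra_twh(400, 0.40, 0.60):.0f} TWh/yr")  # ≈ 701 TWh/yr
```

Even modest capacity-factor increases on a large fleet translate into hundreds of TWh per year, which is why fuel supply and fuel price, not plant construction, become the binding constraints.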

How are we getting the power? The most obvious way is to displace less productive industrial uses, but we won't let that happen, so we must build new power: natural gas. Getting to 100 GW would be pretty wild, but still doable with natural gas.

If we let the price of electricity go up, we would naturally get conservation across residential, commercial, and industrial users. There is precedent for this: when Juneau, Alaska lost access to its hydropower plant, electricity got ~6 times as expensive and people reduced consumption by 25%. Of course people would complain, and then they would support much more building, but we don't have to do the building first to get 20% of current electricity production for AI.
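The Juneau numbers imply a (very rough) price elasticity of electricity demand. Treating the episode as a single constant-elasticity data point, which is my simplification, not the comment's:

```python
import math

# Implied price elasticity of electricity demand from the Juneau episode:
# price rose ~6x and consumption fell ~25%.
price_ratio = 6.0
quantity_ratio = 0.75  # consumption fell by 25%

# Constant-elasticity model: Q2/Q1 = (P2/P1)^elasticity
elasticity = math.log(quantity_ratio) / math.log(price_ratio)
print(f"Implied elasticity: {elasticity:.2f}")  # ≈ -0.16
```

An elasticity around -0.16 is in the inelastic range, consistent with the comment's point: prices have to rise a lot to free up 20% of demand, but the demand response is real.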

For those thinking about carbon: doing it in America with natural gas emits less carbon than doing it in the UAE, where presumably you would be using oil. Emissions are fungible. If you say 'but think of our climate commitments' and claim that it matters where the emissions happen, you are at best confusing the map for the territory.

Though there are instances in the Middle East of oil being used for electric power, this happens only because of massive subsidies; the true cost is extremely expensive electricity. So I think the UAE would be using natural gas.


Thanks for digging into the data! I agree that the rational response to being predisposed to a problem would be to actively address it. But I still think a common response would be one of fatalism and stress. Have you looked into other potential sources of the nocebo effect? Maybe people being misdiagnosed with diseases that they don't actually have?

You might say that the persistence of witch doctors is weak evidence of the placebo effect. But I would guess that the nocebo effect (believing something is going to hurt you) would be stronger, because stress takes years off people's lives. The Secret of Our Success cited a study of the Chinese belief that birth year affects diseases and lifespan: Chinese people living in the US who were born in the year associated with cancer lived ~four years less than those with other birth years.

I did have some probability mass on AI boxing being relevant. And I still have some probability mass that there will be sudden recursive self-improvement. But I also had significant probability mass on AI being economically important, and therefore very visible. And with an acceleration of progress, I thought many people would be concerned about it. I don't know that I would have predicted a particular ChatGPT moment (I probably would have guessed some large AI accident), but the point is that we should have been ready for a case where the public/governments became concerned about AI. I think the fact that there were some AI governance efforts before ChatGPT was due in large part to the people saying there could be slow takeoff, like Paul.

I'm surprised no one has mentioned Paul's long-standing argument (e.g.) that continuous progress means slow takeoff. Of course there's Hanson as well.

Interesting - I was thinking it was going to be about the analogy with the collapse of civilization and how far we might fall. I am concerned that if we lose industrial civilization, we might not be able to figure out how to go back to subsistence farming, or even hunting and gathering (The Secret of Our Success), so we may fall all the way to extinction. But I think there are ways of not pulling up the ladder behind us in this case as well, such as planning for meeting basic needs in low-tech ways.

I don't have a strong opinion, because I think there's huge uncertainty in what is healthy. But for instance, my intuition is that a plant-based meat with very similar nutritional characteristics to animal meat would be about as healthy (or unhealthy) as the meat itself. The plant-based meat would be ultra-processed, but one could think of animal meat as ultra-processed plants, so I guess one could argue that that is the reason animal meat is unhealthy?
