denkenberger

Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship, is a Penn State distinguished alumnus, and is a registered professional engineer. He has authored or co-authored 134 publications (>4400 citations, >50,000 downloads, h-index = 34, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.


Comments


Why does the chart not include energy? Prepared meals in grocery stores cost more, so their increased prevalence would be part of the explanation. Also, grains got more expensive in the last 20 years partly due to increased use in biofuels.

As I mentioned, the mass scaling was lower than the 3rd power (also because the designs went from fixed to variable RPM and blade pitch, which reduces loading), so if the exponent were below 2.4, larger wind turbines would use slightly less mass per unit of energy produced. But the main reason for large turbines is lower construction and maintenance labour per unit of energy produced (this is especially true for offshore turbines, where maintenance is very expensive).

You could build one windmill per Autofac, but the power available from a windmill scales as the fifth power of the height, so it probably makes sense for a group of Autofacs to build one giant windmill to serve them all.

The swept area of a wind turbine scales as the second power of the height (assuming constant aspect ratios), and wind velocity increases as roughly the 1/7 power of height. Since power goes as the third power of velocity, overall power scales as ~height^2.4. The problem is that the amount of material required scales roughly as the 3rd power of the height; that would be exactly the case with constant aspect ratios. In practice, the scale-up of wind turbines over the last few decades has not gone that fast, partly because of higher-strength materials and partly because of optimization. Anyway, I agree there are economies of scale over micro wind turbines, but they aren't that large from a material perspective (they are mostly driven by labour savings).
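The exponents in this argument combine in a couple of lines; here is a quick sketch of the arithmetic, with "height" standing in for all linear dimensions at a constant aspect ratio:

```python
# Combine the scaling exponents stated in the comment above.
area_exp = 2            # swept area ~ height^2
wind_shear_exp = 1 / 7  # wind speed ~ height^(1/7)
mass_exp = 3            # material ~ height^3 at constant aspect ratio

# power ~ velocity^3 * area, so the exponents add
power_exp = 3 * wind_shear_exp + area_exp

print(round(power_exp, 2))             # 2.43, i.e. the ~2.4 above
print(round(mass_exp - power_exp, 2))  # 0.57: mass per unit power grows ~height^0.57
```

So under the idealized constant-aspect-ratio assumptions, a turbine twice as tall delivers ~2^2.43 the power but needs ~2^0.57 more material per unit of power, which is why the observed sub-cubic mass scaling matters.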

Data centers running large numbers of AI chips will obviously run them as many hours as possible, as they are rapidly depreciating and expensive assets. Hence, each H100 will require an increase in peak power grid capacity, meaning new power plants.


My comment here explains how the US could free up greater than 20% of current electricity generation for AI, and my comment here explains how the US could produce more than 20% extra electricity with current power plants. Yes, duty cycle is an issue, but backup generators (e.g. at hospitals) could come on during peak demand if the price is high enough to ensure that the chips could run continuously.

If you pair solar with compressed air energy storage, you can inexpensively (unlike with chemical batteries) get to around 75% utilization of your AI chips (with several days of storage), but I'm not sure that's enough, so natural gas would be good for the other ~25% (wind power is also anticorrelated with solar both diurnally and seasonally, but you might not have good resources nearby).
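As a rough illustration of what "several days of storage" implies at data-center scale, here is back-of-envelope sizing with assumed numbers (a 100 MW cluster, a 25% solar capacity factor, and three days of storage; none of these figures are from the comment):

```python
# Illustrative sizing only; all inputs are assumptions for the example.
load_mw = 100           # assumed constant AI-cluster draw
storage_days = 3        # "several days" of ride-through
capacity_factor = 0.25  # typical utility-scale solar, assumed

storage_mwh = load_mw * 24 * storage_days  # energy capacity of the CAES
array_mw = load_mw / capacity_factor       # nameplate solar to match average load

print(storage_mwh)  # 7200 (MWh of storage)
print(array_mw)     # 400.0 (MW of solar nameplate)
```

The point is just scale: even a modest cluster implies gigawatt-hour-class storage, which is why inexpensive bulk options like compressed air matter more than battery chemistry here.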

Natural gas is a fact question. I have multiple sources who confirmed Leopold’s claims here, so I am 90% confident that if we wanted to do this with natural gas we could do that. I am 99%+ sure we need to get our permitting act together, and would even without AI as a consideration…

A key consideration is that if there is not time to build green energy including fission, and we must choose, then natural gas (IIUC) is superior to oil and obviously vastly superior to coal.


My other comment outlined how >20% of US electricity could be freed up quickly by conservation driven by high electricity prices. The other way the US could get >20% of current US electricity for AI without building new power plants is running the ones we have more. This can be done quickly for natural gas by taking it away from other uses (the high price will drive conservation). There are not that many other uses for coal, but agricultural residues or wood could potentially be used to co-fire in coal power plants. If people didn’t mind spending a lot of money on electricity, petroleum distillates could be diverted to some natural gas power plants.

How are we getting the power? Most obvious way is to displace less productive industrial uses but we won’t let that happen. We must build new power. Natural gas. 100 GW will get pretty wild but still doable with natural gas. 

If we let the price of electricity go up, we would naturally get conservation across residential, commercial, and industrial users. There are precedents for this: when Juneau, Alaska lost access to its hydropower plant, electricity got ~6 times as expensive and people reduced consumption by 25%. Of course people would complain, and then they would support much more building, but we don't have to do the building first to get 20% of current electricity production for AI.
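For scale, the Juneau numbers imply a short-run price elasticity of demand of roughly -0.16 (a sketch, assuming a simple constant-elasticity demand curve):

```python
# Implied price elasticity from the Juneau episode cited above:
# price up ~6x, consumption down ~25%. Assumes constant-elasticity
# demand, which is a simplification.
import math

price_ratio = 6.0      # electricity ~6 times as expensive
quantity_ratio = 0.75  # consumption reduced by 25%

elasticity = math.log(quantity_ratio) / math.log(price_ratio)
print(round(elasticity, 2))  # -0.16
```

Even that small elasticity is enough for the argument: a large price increase frees up a double-digit share of demand without any new construction.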

For those thinking about carbon, doing it in America with natural gas emits less carbon than doing it in the UAE where presumably you are using oil. Emissions are fungible. If you say ‘but think of our climate commitments’ and say that it matters where the emissions happen, you are at best confusing the map for the territory.

Though there are instances in the Middle East of using oil for electric power, this only happens because of massive subsidies; the true cost is extremely expensive electricity. So I think the UAE would be using natural gas.



Thanks for digging into the data! I agree that the rational response to being predisposed to a problem should be to actively address it. But I still think a common response would be fatalism and stress. Have you looked into other potential sources of the nocebo effect? Maybe people being misdiagnosed with diseases that they don't actually have?

You might say that the persistence of witch doctors is weak evidence of the placebo effect. But I would guess that the nocebo effect (believing something is going to hurt you) would be stronger, because stress takes years off people's lives. The Secret of Our Success cited a study of the Chinese belief that birth year affects diseases and lifespan: Chinese people living in the US whose birth year was associated with cancer lived ~four years less than those born in other years.

I did have some probability mass on AI boxing being relevant, and I still have some probability mass that there will be sudden recursive self-improvement. But I also had significant probability mass on AI being economically important, and therefore very visible. And with an acceleration of progress, I thought many people would be concerned about it. I don't know that I would've predicted a particular ChatGPT moment (I probably would have guessed some large AI accident), but the point is that we should have been ready for the case where the public/governments became concerned about AI. I think the fact that there were some AI governance efforts before ChatGPT was due in large part to the people saying there could be slow takeoff, like Paul.
