denkenberger

Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship, is a Penn State distinguished alumnus, and is a registered professional engineer. He has authored or co-authored 152 publications (>5100 citations, >60,000 downloads, h-index = 36, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here) and on Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.

Comments

AI #132 Part 2: Actively Making It Worse
denkenberger · 3d

> I can strongly confirm that few of the people worried about AI killing everyone, or EAs that are so worried, favor a pause in AI development at this time, or supported the pause letter or took other similar actions.
>
> An especially small percentage (but not zero!) would favor any kind of unilateral pause, either by Anthropic or by the West, without the rest of the world.
>
> > Holly Elmore (PauseAI): It's kinda sweet that PauseAI is so well-represented on twitter that a lot of people think it *is* the EA position. Sadly, it isn't.
>
> The EAs want Anthropic to win the race. If they wanted Anthropic paused, Anthropic would kick those ones out and keep going but it would be a blow.

I tried to get at this issue with polls on EA Forum and LW. For EAs, 26% want to stop or pause AI globally, 13% want to pause it even if only done unilaterally. I would not call this an especially small percentage.

My summary for EAs was: "13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab. So if I had to summarize the median respondent, it would be strong regulation for AI or pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated."

My summary for LW was: "the strongest support is for pausing AI now if done globally, but there's also strong support for making AI progress slow, pausing if disaster, and pausing if greatly accelerated progress. There is only moderate support for shutting AI down for decades, and near zero support for pausing if high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the extreme critics saying rationalists want to accelerate AI in order to live forever are incorrect, and also the other extreme critics saying rationalists don't want any AGI are incorrect. Overall, rationalists seem to prefer a global pause either now or soon."
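A quick sketch of where the median EA Forum respondent falls, tallying the cumulative distribution from the rounded percentages quoted above (they sum to roughly 101% because of rounding):

```python
# Rounded EA Forum poll percentages as summarized above.
categories = [
    ("never build AGI", 13),
    ("pause AI now in some form", 26),
    ("pause AI if event/threshold", 21),
    ("other regulation", 31),
    ("neutral", 5),
    ("accelerate AI in a safe US lab", 5),
]

# Walk the cumulative distribution to find where the 50th percentile falls.
cumulative = 0
for name, pct in categories:
    cumulative += pct
    if cumulative >= 50:
        # Cumulative hits 60% at the conditional-pause bucket (13 + 26 + 21).
        print(f"median respondent falls in: {name}")
        break
```

This lands on the "pause AI if a particular event/threshold" bucket, consistent with the summary's characterization of the median respondent.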

AI #132 Part 2: Actively Making It Worse
denkenberger · 5d

> The replies are full of people pointing out the 'two grids' claim is simply not true. Why is the Secretary of Energy coming out, over and over again, with this bold anti-energy stance backed by absurdly false claims and arguments?
>
> Solar power and batteries are the future unless and until we get a big breakthrough. If we are sabotaging American wind and solar energy, either AGI shows up quickly enough to bail us out, our fusion energy projects bear fruit and hyperscale very quickly or we are going to lose. Period.

Intermittent renewable energy alone does require a grid to support it. It is possible for wind and solar to be cheaper than the variable cost of conventional power plants, but in most places they are not yet cheaper without subsidy.

One could theoretically replace the current system with wind plus solar plus batteries, but it would be extremely expensive. Either you build the wind and solar capacity far larger than needed, waste most of the energy, and still need batteries overnight, or you need something like days of battery storage, which is very expensive. You could use the excess electricity from the overbuilding scenario to make hydrogen, but hydrogen is also a long way from being economical.

So what we could do economically at current prices is pumped hydropower for storage (geographically constrained) or underground compressed air energy storage (somewhat geographically constrained, but saline aquifers are very common, and the US already stores a lot of natural gas seasonally that way). These have low enough storage cost to be feasible for days' worth of storage. Or we could do fission (yes, I know, public perception and regulations, but it's not clear that fusion would be much better).
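To put a rough number on why days of battery storage is "very expensive", here is a back-of-envelope sketch. The average-load and installed-battery-cost figures are illustrative assumptions, not sourced values:

```python
# Back-of-envelope cost of covering US average electric load from
# batteries alone for a multi-day lull in wind and solar output.
# All inputs are rough illustrative assumptions.
avg_load_gw = 470           # assumed US average electric load, GW
days_of_storage = 3         # a "days of battery storage" scenario
battery_cost_per_kwh = 300  # assumed installed cost, $/kWh

# GW -> kW (x1e6), days -> hours (x24)
energy_needed_kwh = avg_load_gw * 1e6 * 24 * days_of_storage
total_cost = energy_needed_kwh * battery_cost_per_kwh

print(f"storage needed: {energy_needed_kwh / 1e9:.0f} TWh")  # ~34 TWh
print(f"battery cost: ${total_cost / 1e12:.0f} trillion")    # ~$10 trillion
```

Even with generous assumptions, the result is on the order of trillions of dollars for the batteries alone, which is the sense in which overnight-plus storage from batteries is currently uneconomical at grid scale.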

Transportation as a Constraint
denkenberger · 5d

Thanks - I did not know that. Alexander the Great's ships would have had oars, but I guess it wasn't enough.

Transportation as a Constraint
denkenberger · 6d

If the wind is from the wrong direction, sailing ships can tack. The big problem is when there is no wind at all.

The Cats are On To Something
denkenberger · 9d

> I don't think you can claim that wildcats of the stone age would be pleased with what we've done to domestic cats either, sticking them in tiny territories where they cannot roam, kingdoms of a cage. I'm not sure using human judgement in this matter is very useful as we don't have a good concept of what other species value.

It's just one factor, but the life expectancy of domestic dogs and cats is generally higher than that of their wild progenitors. I agree we can't know for sure, but I would guess that limitless food, good healthcare, and less worry about being attacked at night mean the subjective wellbeing of domesticated cats and dogs is higher than that of wild ones, despite less freedom.

Yudkowsky on "Don't use p(doom)"
denkenberger · 12d

> (I think that it's common for AI safety people to talk too much about totally quashing risks rather than reducing them, in a way that leads them into unproductive lines of reasoning.)

Especially because we need to take into account non-AI X-risks. So maybe "What is the AI policy that would most reduce X-risks overall?" For people with lower P(X-risk|AGI) (if you don't like P(doom)), longer timelines, and/or more worry about other X-risks, the answer may be to do nothing or even to accelerate AI (harkening back to Yudkowsky's "Artificial Intelligence as a Positive and Negative Factor in Global Risk").

Female sexual attractiveness seems more egalitarian than people acknowledge
denkenberger · 13d

Related survey.

Underdog bias rules everything around me
denkenberger · 20d

I think another example of both sides thinking they are the underdog is environmentalists versus nuclear, agricultural (GMOs, pesticides, and artificial fertilizers), and fossil fuel companies.

Four ways learning Econ makes people dumber re: future AI
denkenberger · 21d

I appreciated the attention to detail, e.g. Dyson Swarm instead of Dyson Sphere, and googol instead of google. Maybe I missed it, but I think a big one is that economists typically only look back 100 or so years so they have a strong prior of roughly constant growth rates. Whereas if you look back further, it really does look like an explosion.
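The contrast between a constant-growth-rate prior and a long-run "explosion" can be illustrated with a toy hyperbolic growth model, y(t) = 1/(c·(T − t)). Unlike exponential growth, whose doubling time is constant, its doubling time keeps shrinking as t approaches T. All parameters here are purely illustrative, not fitted to historical data:

```python
# Toy hyperbolic growth: y(t) = 1 / (c * (T - t)).
# y doubles every time the remaining gap (T - t) halves, so the
# doubling time shrinks and y blows up in finite time at t = T.
# Purely illustrative parameters.
T = 100.0  # hypothetical blow-up date
c = 0.01

def y(t):
    return 1.0 / (c * (T - t))

t = 0.0
while T - t > 1.0:
    doubling_time = (T - t) / 2  # time until y doubles from y(t)
    print(f"t={t:7.3f}  y={y(t):8.2f}  next doubling in {doubling_time:.3f}")
    t += doubling_time
```

A constant-growth prior fit to the last 100 years of such a series would badly underestimate what the longer record implies.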

Could one country outgrow the rest of the world?
denkenberger · 21d

Very interesting analysis.

> Second, the company acquires >50% of the world’s physical capital.

I don't think this would change your argument too much, but it seems that if you had lots of skilled labor, you would not actually need greater than 50% of the world's physical capital to outgrow the rest of the world. 

Poll on De/Accelerating AI (1mo)
Graphing AI economic growth rates, or time to Dyson Swarm (3mo)
Relocation triggers (3mo)
ALLFED emergency appeal: Help us raise $800,000 to avoid cutting half of programs (5mo)
Secular Solstice for children (3y)
Should we be spending no less on alternate foods than AI now? (8y)