denkenberger
Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship, is a Penn State distinguished alumnus, and is a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, Radio New Zealand, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University and University College London.

Comments
When Will AI Transform the Economy?
denkenberger · 1d

> However, a decade and a half after those first demo drives, Waymo has finally hit a point where the error rate is so low that it’s possible to pull the human safety monitor out of the car completely. Suddenly you have a new kind of post-labor business model that’s potentially much more valuable - an autonomous fleet that can run 24 hours a day with minimal labor costs and with perfectly consistent service and safe driving. This corresponds to the second bend in the graph.

They pulled the human safety monitor out of the car, but I think humans are still doing work remotely (at Cruise, as of 2023, each remote operator was monitoring 15-20 cars). But that can still be consistent with minimal labor costs.

Reply
Which side of the AI safety community are you in?
denkenberger · 9d

Here's the equivalent poll for LessWrong. And here's my summary:

"Big picture: the strongest support is for pausing AI now if done globally, but there's also strong support for making AI progress slow, pausing if disaster, pausing if greatly accelerated progress. There is only moderate support for shutting AI down for decades, and near zero support for pausing if high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the extreme critics saying rationalists want to accelerate AI in order to live forever are incorrect, and also the other extreme critics saying rationalists don't want any AGI are incorrect. Overall, rationalists seem to prefer a global pause either now or soon."

Reply
Ethical Design Patterns
denkenberger · 24d

> Heuristic C: “If something has a >10% chance of killing everyone according to most experts, we probably shouldn’t let companies build it.”
>
> IMO, it’s hard to get a consensus for Heuristic C at the moment even though it kind of seems obvious. It’s even hard for me to get my own brain to care wholeheartedly about this heuristic, to feel its full force, without a bunch of “wait, but …”.
>
> Heuristic F: “Give serious positive consideration to any technology that many believe might save billions of lives.”

That’s a big consideration for short/medium-termists. Could another heuristic (for the longtermists) be Maxipok (maximize the probability of an OK outcome)? By Bostrom’s definition of X risk, a permanent pause is an X catastrophe. So if one thought the probability of the pause becoming permanent was greater than p(X catastrophe|AGI), then a pause would not make sense. Even if one thought there were no chance of the pause becoming permanent, if the background X risk per year were greater than the reduction in p(X catastrophe|AGI) from each year of pause, it would also not make sense to pause from a longtermist perspective (a rough numerical sketch of this comparison is below). Putting these together, it’s not clear that p(X catastrophe|AGI) ≈ 10% should mean companies aren't allowed to build it (though stronger regulation could very well make sense).
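To make the comparison concrete, here is a toy model with made-up placeholder probabilities (my own illustration, not estimates from Bostrom or the post), comparing total existential risk with and without an N-year pause:

```python
# Toy Maxipok-style comparison (all numbers are illustrative placeholders, not real estimates).

def total_x_risk(pause_years: int,
                 background_risk_per_year: float = 0.001,      # assumed non-AI X risk per year
                 p_x_given_agi: float = 0.10,                   # assumed p(X catastrophe|AGI)
                 risk_reduction_per_pause_year: float = 0.005,  # assumed safety gain per year of pause
                 p_pause_permanent: float = 0.0) -> float:
    """Rough total X risk for an N-year pause; a permanent pause counts as an
    X catastrophe under Bostrom's definition."""
    p_background = 1 - (1 - background_risk_per_year) ** pause_years
    p_agi = max(p_x_given_agi - risk_reduction_per_pause_year * pause_years, 0.0)
    return (p_pause_permanent
            + (1 - p_pause_permanent) * (p_background + (1 - p_background) * p_agi))

for years in (0, 5, 20):
    print(f"{years:>2}-year pause: total X risk ~ {total_x_risk(years):.3f}")
```

With these made-up numbers a pause helps; raise the probability of the pause becoming permanent or the background X risk per year and the sign flips, which is the point of the comparison.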

Reply
AI #132 Part 2: Actively Making It Worse
denkenberger · 2mo

> I can strongly confirm that few of the people worried about AI killing everyone, or EAs that are so worried, favor a pause in AI development at this time, or supported the pause letter or took other similar actions.
>
> An especially small percentage (but not zero!) would favor any kind of unilateral pause, either by Anthropic or by the West, without the rest of the world.
>
> Holly Elmore (PauseAI): It's kinda sweet that PauseAI is so well-represented on twitter that a lot of people think it *is* the EA position. Sadly, it isn't.
>
> The EAs want Anthropic to win the race. If they wanted Anthropic paused, Anthropic would kick those ones out and keep going but it would be a blow.

I tried to get at this issue with polls on the EA Forum and LW. For EAs, 26% want to stop or pause AI globally, and 13% want to pause it even if only done unilaterally. I would not call this an especially small percentage.

My summary for EAs was: "13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab. So if I had to summarize the median respondent, it would be strong regulation for AI or pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated."

My summary for LW was: "the strongest support is for pausing AI now if done globally, but there's also strong support for making AI progress slow, pausing if disaster, and pausing if greatly accelerated progress. There is only moderate support for shutting AI down for decades, and near zero support for pausing if high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (with only ~30 people voting), but it does appear that the extreme critics saying rationalists want to accelerate AI in order to live forever are incorrect, and also the other extreme critics saying rationalists don't want any AGI are incorrect. Overall, rationalists seem to prefer a global pause either now or soon."

Reply
AI #132 Part 2: Actively Making It Worse
denkenberger · 2mo

> The replies are full of people pointing out the ‘two grids’ claim is simply not true. Why is the Secretary of Energy coming out, over and over again, with this bold anti-energy stance backed by absurdly false claims and arguments?
>
> Solar power and batteries are the future unless and until we get a big breakthrough. If we are sabotaging American wind and solar energy, either AGI shows up quickly enough to bail us out, our fusion energy projects bear fruit and hyperscale very quickly or we are going to lose. Period.

Intermittent renewable energy alone does require a grid to support it. It is possible for wind and solar to be cheaper than the variable cost of conventional power plants, but in most places they aren't yet without subsidies. One could theoretically replace the current system with wind plus solar plus batteries, but it would be extremely expensive: either you overbuild the wind and solar far beyond demand and waste most of the energy (and still need batteries overnight), or you need something like days of battery storage, which is very costly (rough numbers in the sketch below). You could use the excess electricity from the overbuilding scenario to make hydrogen, but hydrogen is also a long way from being economical. So the things we could do economically at current prices are pumped hydropower storage (geographically constrained) or underground compressed air energy storage (somewhat geographically constrained, but saline aquifers are very common and the US already stores a lot of natural gas seasonally that way). These have low enough storage cost to be feasible for days' worth of storage. Or we could do fission (yes, I know, public perception and regulations, but it's not clear that fusion would be much better).
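For a sense of scale, here is a back-of-envelope sketch using my own assumed load and installed-cost figures (illustrative round numbers, not from any particular study), comparing a few days of storage across technologies:

```python
# Back-of-envelope storage cost sketch (all figures are rough assumptions for illustration).

US_AVG_LOAD_GW = 460        # assumed average US electric load, GW
STORAGE_HOURS = 3 * 24      # assume ~3 windless/cloudy days of storage are needed

capex_per_kwh = {           # assumed installed energy-capacity costs, $/kWh
    "lithium-ion batteries": 300,
    "pumped hydro": 75,
    "compressed air (CAES)": 50,
}

energy_kwh = US_AVG_LOAD_GW * 1e6 * STORAGE_HOURS  # GW -> kW, times hours of storage
for tech, cost in capex_per_kwh.items():
    print(f"{tech}: ~${energy_kwh * cost / 1e12:.1f} trillion for {STORAGE_HOURS} h")
```

Even with these crude numbers, days of battery storage lands in the many-trillion-dollar range, while pumped hydro and CAES are several times cheaper per kWh of capacity, which is why I point to them (where geography allows) rather than batteries for multi-day storage.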

Reply
Transportation as a Constraint
denkenberger · 2mo

Thanks - I did not know that. Alexander the Great's ships would have had oars, but I guess it wasn't enough.

Reply
Transportation as a Constraint
denkenberger · 2mo

If the wind is from the wrong direction, sailing ships can tack. The big problem is when there is no wind at all.

Reply
The Cats are On To Something
denkenberger · 2mo

> I don't think you can claim that wildcats of the stone age would be pleased with what we've done to domestic cats either, sticking them in tiny territories where they cannot roam, kingdoms of a cage. I'm not sure using human judgement in this matter is very useful as we don't have a good concept of what other species value.

Just one factor, but the life expectancy of domestic dogs and cats is generally higher than that of their wild progenitors. I agree we can't know for sure, but I would guess that limitless food, good healthcare, and less worry about being attacked at night mean the subjective wellbeing of domesticated cats and dogs is higher than that of wild ones, despite less freedom.

Reply
Yudkowsky on "Don't use p(doom)"
denkenberger · 2mo

> (I think that it's common for AI safety people to talk too much about totally quashing risks rather than reducing them, in a way that leads them into unproductive lines of reasoning.)

Especially because we need to take into account non-AI X-risks. So maybe the question should be "What is the AI policy that would most reduce X-risks overall?" For people with lower P(X-risk|AGI) (if you don't like P(doom)), longer timelines, and/or greater worry about other X-risks, the answer may be to do nothing or even to accelerate AI (harkening back to Yudkowsky's "Artificial Intelligence as a Positive and Negative Factor in Global Risk").

Reply
Female sexual attractiveness seems more egalitarian than people acknowledge
denkenberger · 2mo

Related survey.

Reply
Posts

Poll on De/Accelerating AI · 13 karma · 3mo · 38 comments
Graphing AI economic growth rates, or time to Dyson Swarm · 4 karma · 4mo · 2 comments
Relocation triggers · 2 karma · 5mo · 0 comments
ALLFED emergency appeal: Help us raise $800,000 to avoid cutting half of programs · 49 karma · 7mo · 9 comments
Secular Solstice for children · 31 karma · 3y · 1 comment
Should we be spending no less on alternate foods than AI now? · 4 karma · 8y · 4 comments