Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 152 publications (>5100 citations, >60,000 downloads, h-index = 36, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
More manual workers: ~2x
Globally, manual workers far outnumber knowledge workers. However, it's true that you could lure most subsistence farmers to factories with high wages and then use tractors to farm the land (demand for food would go way up with the increased global incomes, though this might eventually be mitigated by plant-based/cultivated meat). I think many knowledge workers would opt for unemployment benefits/UBI rather than become manual workers. So I still doubt you could double the manual workforce quickly.
How many times production must double to halve the cost
Moore’s Law: 0.2 (Bloom et al. (2020), Table 7)
I got directed to “The Fall of the Labor Share and the Rise of Superstar Firms,” which doesn’t have a Table 7. AI says, “For transistors and integrated circuits, the cost reduction typically follows an experience curve where costs decrease by approximately 20-30% for every cumulative doubling of production volume.” That would be ~2-3 doublings to halve the cost, which is more consistent with my understanding and closer to the other examples. The difference is that the cumulative number of transistors doubles much faster per calendar year, and therefore the cost per transistor falls much faster per calendar year. If production only had to double 0.2 times to halve the cost, that would be a ~97% cost reduction for every cumulative doubling.
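The learning-curve arithmetic can be sketched as follows. The 20-30% and 0.2 figures come from the discussion above; the formula is the standard experience-curve model, in which unit cost falls by a fixed fraction with each cumulative doubling of production.

```python
import math

# Experience (learning) curve: cost(n doublings) = cost0 * (1 - r)**n,
# where r is the fractional cost reduction per cumulative doubling.

def doublings_to_halve_cost(reduction_per_doubling):
    """Cumulative-production doublings needed to halve unit cost."""
    return math.log(0.5) / math.log(1 - reduction_per_doubling)

def reduction_per_doubling(doublings_to_halve):
    """Per-doubling cost reduction implied by a doublings-to-halve figure."""
    return 1 - 0.5 ** (1 / doublings_to_halve)

# A 25% reduction per doubling implies ~2.4 doublings to halve cost
# (20% implies ~3.1, 30% implies ~1.9).
print(doublings_to_halve_cost(0.25))  # ~2.41

# A figure of 0.2 doublings to halve cost implies a ~97% cost
# reduction per cumulative doubling (1 - 0.5**5 = 0.969).
print(reduction_per_doubling(0.2))    # ~0.969
```

This confirms the sanity check in the comment: 0.2 doublings to halve cost is equivalent to a ~97% cost reduction per doubling, far steeper than the usual 20-30% experience curve.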
I think you make a number of good points, and just being able to move more energy and matter probably does not make us safer.
I do think a future humanity, having successfully solved such problems, would have a better chance of also successfully coordinating to not build AGI prematurely, and would also have a less pressing need for the economic growth that comes from racing ahead to do so.
Yes, solving aging before ASI would dramatically reduce the urgency that many people feel to race to ASI. But the likely increase in anthropogenic X risks would dwarf the benefit of being able to prevent natural X risks.
I think if the ASI(s) arose in a datacenter in orbit, there are some scenarios in which that could be beneficial, like if there were AI-AI conflict. Regardless, I think it would pretty quickly become not dependent on humans for survival.
I think Paul Christiano's argument that continuous progress is safer is that society will be significantly less vulnerable to ASI because it will have built up defenses ahead of time. That makes sense if progress is continuous all the way. But my intuition is that even if we get only some continuous development (e.g. a few orders of magnitude of AI-driven economic growth), that would probably mean more sophisticated AI defenses, which would give us a little more protection against ASI.
You know what they say about the good old days? They are a product of a bad memory.
Seriously, is there nowhere in America we can make this happen at scale? If we wanted to, we could do this ourselves easily. We have the natural gas, even if nuclear would be too slow to come online.
It's only 5 GW, and the US average is ~440 GW. The US would not have to build any more power plants - just run the ones it has more. It could just reduce liquefied natural gas exports and produce another >25 GW electrical average.
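A back-of-envelope check of the LNG claim, under illustrative assumptions: the export volume (~12 billion cubic feet per day, roughly the order of recent US exports) and the combined-cycle plant efficiency (~50%) are my assumptions, not figures from the comment.

```python
# Rough check: could redirecting US LNG exports to domestic generation
# yield well over 25 GW of average electrical output?
# Export volume and efficiency are illustrative assumptions.
LNG_EXPORTS_BCF_PER_DAY = 12        # assumed, order of recent US exports
BTU_PER_CUBIC_FOOT = 1040           # typical natural gas heating value
JOULES_PER_BTU = 1055
SECONDS_PER_DAY = 86400
COMBINED_CYCLE_EFFICIENCY = 0.5     # assumed modern gas plant efficiency

thermal_watts = (LNG_EXPORTS_BCF_PER_DAY * 1e9 * BTU_PER_CUBIC_FOOT
                 * JOULES_PER_BTU) / SECONDS_PER_DAY
electrical_gw = thermal_watts * COMBINED_CYCLE_EFFICIENCY / 1e9
print(round(electrical_gw))  # ~76 GW
```

Under these assumptions, full export redirection would yield roughly 76 GW electrical, so even diverting around a third of exports would clear the >25 GW figure in the comment.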
Yes, I was thinking of adding that it could appeal to contrarians who may be attracted to a book with a title they disagreed with. As for people who don't have a strong opinion coming in, I can see some people being attracted to an extreme title. And I get that titles need to be simple. I think a title like "If anyone builds it, we lose control" would be more defensible. But I think the probability distributions from Paul Christiano are more reasonable.
aren't sold on the literal stated-with-certainty headline claim, "If anyone builds it, everyone dies."
Unfortunately, the graphic below does not include the simple case of stating something plainly, but I'm interested in people's interpretation of the confidence level. I think a reasonable starting point is interpreting it as 90% confidence. I couldn't quickly find what percent of AI safety researchers have 90% confidence in extinction (not just catastrophe or disempowerment), but it's less than 1% in the AI Impacts survey, which includes both safety and capabilities researchers. I couldn't find the figure for the public. Still, I think almost everyone will just bounce off this title. But I understand that's what the authors believe, and perhaps it could influence the relatively few existing extreme doomers in the public?
Edited to add: After writing this, I asked Perplexity what P(doom) someone should have to be called an extreme doomer, and it said 90%+ and mentioned Yud. Of course, extreme doesn't necessarily mean wrong. And since only about 10,000 copies need to sell in a week to make the NYT bestseller list, that very well could happen even if 99% of people bounce off the title.
That's a good point about the space taken up. Even outside of expensive cities, construction cost is ~$150/ft2 ($1500/m2), so even without counting the lost space around the unit, the cost of the floor space is likely higher than the cost of the unit if it's put on the floor. You got impressive results with the ceiling fan. We are working on a project to estimate the scale-up speed of in-room air filtration in an engineered pandemic. It's focused on vital industries, but there are often ceiling fans there. A big advantage of in-room filtration over masks is intelligibility, but noise can interfere with that as well (though at least you have the advantage of lip cues).
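The floor-space arithmetic can be sketched as follows; the $1500/m2 figure is from the comment, while the unit footprint and purifier price are assumptions for illustration.

```python
# Back-of-envelope: does the floor space a portable air cleaner occupies
# cost more than the unit itself?  Footprint and unit price are assumed.
CONSTRUCTION_COST_PER_M2 = 1500   # ~$150/ft2, from the comment
FOOTPRINT_M2 = 0.35               # assumed ~60 cm x 60 cm floor unit
UNIT_PRICE = 250                  # assumed typical HEPA purifier price

floor_space_cost = CONSTRUCTION_COST_PER_M2 * FOOTPRINT_M2
print(floor_space_cost)              # ~$525 of floor space
print(floor_space_cost > UNIT_PRICE) # exceeds the assumed unit price,
                                     # before counting clearance around it
```

Even with these modest assumptions, the floor space alone costs more than the purifier, which is the comment's point about ceiling mounting.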
Yes, I found Worldmapper very enlightening when I discovered its historic population/wealth/etc visualizations in ~2007.
Things like primary metals and chemical manufacturing typically already run 24/7, so I don't think AI-instructed workers would increase output very much there. Mining is also mostly 24/7, but there may be more room to increase output with better workers. o3 estimates light (finished goods) manufacturing runs about 100 hours per week, so there is room for more shifts and faster assembly lines, but you would need to redesign products to be much less material-intensive in order to get a big increase in goods production with roughly the same amount of material going in, and this would require changing the equipment. I do think that switching from making vehicles to making robots would be a large increase in the value produced. And you could argue that, with the massive increase in wealth, the price of goods will go way up. But I don't think that's what you are getting at with your Y-axis label of physical capabilities.

I think there is a lot more room for productivity speedups on the repair and maintenance side. This is important because when we switch to producing high-value products like robots, we would be producing many fewer new cars and appliances. I think we would produce robots to drive existing cars, rather than producing new self-driving cars, because robots require so much less material. We may even produce domestic robots that wash dishes and clothes by "hand" for multiple houses, given the shortage of clothes washers and dishwashers. This is because if wages for manual workers went up a lot worldwide, there would be tremendous demand for appliances. Another way of meeting that demand for clothes is most people using a laundromat. There would also be a lot of demand for new buildings, but buildings are resource-intensive, so we probably could not build that much more in just a few years. The steel would probably be diverted toward robot production, but we could still use wood and (unreinforced) concrete for buildings.
Road building would probably also be reduced as steel reinforcement is needed for concrete roads.
So overall, if we just try to produce more of the same things at current prices, I think a 10x speedup just from AI-directed workers is not feasible. However, with substitution toward much higher-value products like robots, I think it would be feasible.
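The utilization point above can be made concrete with one line of arithmetic, using the ~100 hours/week estimate quoted earlier (o3's estimate, not a measured figure):

```python
# How much output gain is available just from running existing light-
# manufacturing equipment around the clock?
HOURS_PER_WEEK = 168
CURRENT_UTILIZATION_HOURS = 100   # o3's estimate for light manufacturing

utilization_gain = HOURS_PER_WEEK / CURRENT_UTILIZATION_HOURS
print(round(utilization_gain, 2))  # 1.68
```

Extra shifts alone give at most ~1.7x, so reaching anything like a 10x speedup would have to come from faster lines, less material-intensive product redesigns, or substitution toward higher-value products.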
By the way, it looks like your curve of growth is most similar to Hanson's here (though presumably you think it will happen a lot sooner).