Hanson is the most obvious answer, to me.
EDIT: Note, I don't think these people have given explicit probabilities. But they seem much less worried than people from the AI alignment community.
EDIT^2: Also, only the links to Hanson and Jacob's stuff have comparable detail to what you requested.
Bryan Caplan is one. Tyler Cowen too, if you take seriously his claim that nuclear war is by far the greater large-scale risk and assign standard numbers for that. I think David Friedman might agree, though I'll get back to you on that. Geoffrey Hinton seems more worried about autonomous machines than about AI taking over. He thinks deep learning will be enough, but that quite a few more conceptual breakthroughs on the order of transformers will be needed.
Maybe Jacob Cannell? He seems quite optimistic that alignment is on track to be solved. Though I doubt his P(doom) is less than 1%.
Strong disagree. Hanson believes that there's more than a 1% chance of AI destroying all value.
Even if he doesn't see an inside view argument, he makes an outside view argument about the Great Filter.
He probably believes that there's a much larger chance of it killing everyone, and his important disagreement with Yudkowsky is that he thinks it will have value in itself, rather than being a paperclip maximizer. In particular, in the Em scenario, he argues that property rights will keep humans alive for 2 years. Maybe you should read that as <1% of al...
If you're willing to relax the "prominent" part of "prominent reasonable people", I'd suggest myself. I think our odds of doom are < 5%, and I think that pretty much all the standard arguments for doom are wrong. I've written specifically about why I think the "evolution failed to align humans to inclusive genetic fitness" argument for doom via inner misalignment is wrong here: Evolution is a bad analogy for AGI: inner alignment.
I'm also a co-author of The Shard Theory of Human Values sequence, which takes a more optimistic perspective than many other alignment-related memetic clusters and disagrees with lots of past alignment thinking. Though last I checked, I was one of the most optimistic of the Shard theory authors, with Nora Belrose as a possible exception.
I'm not sure Jan would endorse "accelerating capabilities isn't bad." Also I doubt Jan is confident AI won't kill everyone. I can't speak for him of course, maybe he'll show up & clarify.
Broadly, he predicts AGI will be animalistic (a "learning disabled toddler") rather than a consequentialist laser beam or a simulator.
Hmmm...
Ben Garfinkel? https://www.effectivealtruism.org/articles/ea-global-2018-how-sure-are-we-about-this-ai-stuff
Katja Grace? https://worldspiritsockpuppet.com/2022/10/14/ai_counterargs.html
Scott Aaronson? https://www.lesswrong.com/posts/Zqk4FFif93gvquAnY/scott-aaronson-on-reform-ai-alignment
I don't know if any of these people would be confident AI won't kill everyone, but they definitely seem to be smart/reasonable and disagreeing with the standard LW views.
https://www.rudikershaw.com/articles/ai-doom-isnt-coming
https://idlewords.com/talks/superintelligence.htm
https://kk.org/thetechnium/the-myth-of-a-superhuman-ai/
https://arxiv.org/abs/1702.08495v1
https://curi.us/blog/post/1336-the-only-thing-that-might-create-unfriendly-ai
https://www.popsci.com/robot-uprising-enlightenment-now
This one's tongue-in-cheek:
https://arxiv.org/abs/1703.10987
Update 1:
https://www.theregister.com/2015/03/19/andrew_ng_baidu_ai/
Update 2:
Katja Grace gives quite good counterarguments to the case for AI risk.
https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case
Thanks for the links! Net bounty: $30. Sorry! Nearly all of them fail my admittedly-extremely-subjective "I subsequently think 'yeah, that seemed well-reasoned'" criterion.
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I'll publicly post my reasoning on each. (Not posting in order to argue, but if you do convince me that I unfairly dismissed any of them, such that I should have originally awarded a bounty, I'll pay triple.)
(Re-reading this, I notice that my "re...
When it comes to "accelerating AI capabilities isn't bad" I would suggest Kaj Sotala and Eric Drexler with his QNR and CAIS. Interestingly, Drexler has recently left AI safety research and gone back to atomically precise manufacturing because he now worries less about AI risk in general. Chris Olah also believes that interpretability-driven capabilities advances are not bad, in that the positives outweigh the negatives for AGI safety.
For more general AI & alignment optimism I would suggest also Rohin Shah. See also here.
Thanks for the link!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. These three passages jumped out at me as things that I don't think would ever be written by a person with a model of AI that I remotely agree with:
...Popper's argument implies that all thinking entities--human or not, biological or artificial--must
+ 1 for Katja Grace (even though their probability may be >1%, they have some really good arguments)
Ben Garfinkel in response to Joe Carlsmith: https://docs.google.com/document/u/0/d/1FlGPHU3UtBRj4mBPkEZyBQmAuZXnyvHU-yaH-TiNt8w/mobilebasic
Boaz Barak & Ben Edelman: https://www.lesswrong.com/posts/zB3ukZJqt3pQDw9jz/ai-will-change-the-world-but-won-t-take-it-over-by-playing-3
Thanks for the collection! I wouldn't be surprised if it links to something that tickles my sense of "high-status monkey presenting a cogent argument that AI progress is good," but didn't see any on a quick skim, and there are too many links to follow all of them; so, no bounty, sorry!
Here's Peter Thiel making fun of the rationalist doomer mindset in relation to AI, explicitly calling out both Eliezer and Bostrom as "saying nothing": https://youtu.be/ibR_ULHYirs
The relevant section seems to be 26:00-32:00. In that section, I, uh... well, I perceive him as just projecting "doomerism is bad" vibes, rather than making an argument containing falsifiable assertions and logical inferences. No bounty!
Francois Chollet on the implausibility of intelligence explosion:
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. His arguments are, roughly:
Thanks for the link!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. If I had to point at parts that seemed unreasonable, I'd choose (a) the comparison of [X-risk from superintelligent AIs] to [X-risk from bacteria] (intelligent adversaries seem obviously vastly more worrisome to me!) and (b) "why would I... want ...
No bounty, sorry! I've already read it quite recently. (In fact, my question linked it as an example of the sort of thing that would win a bounty. So you show good taste!)
Meta: I agree that looking at arguments for different sides is better than only looking at arguments for one side; but
[...] neutralizing my status-yuck reaction. One promising-seeming approach is to spend a lot of time looking at lots of high-status monkeys who believe it!
sounds like trying to solve the problem by using more of the problem? I think it's worth flagging that {looking at high-status monkeys who believe X} is not addressing the root problem, and it might be worth spending some time on trying to understand and solve the root problem.
I'm sad to say that I myself do not have a proper solution to {monkey status dynamics corrupting ability to think clearly}. That said, I do sometimes find it helpful to thoroughly/viscerally imagine being an alien who just arrived on Earth, gained access to rvnnt's memories/beliefs, and is now looking at this whole Earth-circus from the perspective of a dispassionately curious outsider with no skin in the game.
If anyone has other/better solutions, I'd be curious to hear them.
Bounty [closed]: $30 for each link that leads to me reading/hearing ~500 words from a Respectable Person arguing, roughly, "accelerating AI capabilities isn't bad," and me subsequently thinking "yeah, that seemed pretty reasonable." For example, linking me to nostalgebraist or OpenAI's alignment agenda or this debate.[1] Total bounty capped at $600, first come first served. All bounties (incl. the total-bounty cap) doubled if, by Jan 1, I can consistently read people expressing unconcern about AI and not notice a status-yuck reaction.
Context: I notice that I've internalized a message like "thinking that AI has a <1% chance of killing everyone is stupid and low-status." Because I am a monkey, this damages my ability to consider the possibility that AI has a <1% chance of killing everyone, which is a bummer, because my beliefs on that topic affect things like whether I continue to work at my job accelerating AI capabilities.[2]
I would like to be able to consider that possibility rationally, and that requires neutralizing my status-yuck reaction. One promising-seeming approach is to spend a lot of time looking at lots of high-status monkeys who believe it!
Bounty excludes things I've already seen, and things I would have found myself based on previous recommendations for which I paid bounties (for example, other posts by the same author on the same web site).
Lest ye worry that [providing links to good arguments] will lead to [me happily burying my head in the sand and continuing to hasten the apocalypse] -- a lack of links to good arguments would move much more of my probability-mass to "Less Wrong is an echo chamber" than to "there are basically no reasonable people who think advancing AI capabilities is good."