I recently read "What do ML researchers think about AI in 2022?"


The researchers' aggregate probability of Doom is sub-10%. That's high, but as I understand it, in the minds of people like Eliezer Yudkowsky, we're more likely doomed than not.


I personally lean towards Yudkowsky's views, because
- I don't believe human/evolution-selected minds have thinking power that a machine could not have
- I believe in the Orthogonality Thesis
(I think those two claims can be defended empirically)
- I think it is easier to make a non-aligned machine than an aligned one
(I believe that research currently being carried out strongly hints at the fact that this is true)
- I believe that more people are working on non-aligned AI than on aligned AI
- I think it would be very hard politically to implement a worldwide ban on AI R&D and to successfully prevent anyone from researching it.


Given all this (and probably other observations that I made), I think we're doomed.
I feel my heart beating hard when I tell myself I have to give a number.
I imagine I'm bad at it, that it'll be wrong, that it's more uncomfortable/inconvenient than just saying "we're fucked" without any number, but here goes anyway-
I'd say that we're 
(my brain KEEPS on flinching away from coming up with a number, I don't WANT to actually follow through on all my thoughts and observations about the state of AI and what it means for the Future)-
(I think of all the possible Deus-Ex-Machina that could happen)-
(I imagine how terrible it is if I'm WRONG)-
(Visualizing my probabilities for the AI-doom scenario in hypothetical worlds where I don't live makes it easier, I think)
My probability of doom from AI is around 80% in the next 50 years.
(And my probability of Doom if AI keeps getting better is 95%; one reason it might not keep getting better, I imagine, is that another X-Risk happens before AI does.)
I would be surprised if, out of 5 worlds in our current situation, more than 1 made it out alive from developing AI.

Edit, a week after the post
I'd say my P(Doom) in the next 50 years is now 20-40%.
It's not that I suddenly think doom from AI is less likely, but I think I put my P(Doom) at 80% before because I lumped all of my fears together, as if P(Doom) = P(Outcome I really don't like).
But those two things are different.
For me, P(Doom) = P(humanity is wiped out). That is different from a bad outcome like [A few people own all of the AI and everybody else has a terrible life with 0 chance of overthrowing the system].
To be clear, that situation is terrible and I don't want to live there, but it's not doom.


So, my question:

What do AI researchers know, or think they know, such that their aggregate P(Doom) is only 5-10%?
 

I can see how many of them might just flinch away from the current evidence or thought processes and think the nicer thoughts instead.
But so many of them, such that their aggregate P(Doom) is sub-10%? 

What do they know that I don't?
- We'll need more computing power to run Doom-AI than we will ever have
(but human minds run on brains?)
- We don't need to worry about it now 
(which is a Bad Argument: what matters isn't how far away it is, but how many resources (including time) we'll need, unless the idea is that we'll need a not-yet-built AI to help us build Aligned-AI... dubious.)
- Another AI Winter is likely
- ...

I THINK I know that AI researchers mostly believe horribly wrong arguments for why AI won't happen soon, why it won't wipe us out or deceive us, why alignment is easy, etc. Mostly: thinking about it is uncomfortable, it would hurt their ability to provide for their families as comfortably as they currently do, and it would put them at odds with other researchers / future employers.
But in case I'm wrong, I want to ask: what do they know that I don't, such that they feel so safe about the future of a world with SAI in it?
 

***


I'm finishing reading Harry Potter and the Methods of Rationality for the third time. 
(Because Mad Investor Chaos hasn't been updated since Sept 2. Mhhhhhhh?!)
I'm having somewhat of an existential crisis. Harry gives himself ultimate responsibility. He won't act the ROLE of caring for his goals; he will actually put all of his efforts and ingenuity into their pursuit.
It's clear to me that I'm NOT doing that. I don't have a plan. I'm doing too many things at the same time. And maybe there's something I can do to help, given my set of skills, intelligence, motivation, hero-narrative, etc. 
I'm no Eliezer; I'm a terrible programmer with 0 ML knowledge and a slow-at-maths brain. But I reckon even I, if I trained myself in rationality, persuasion, influence, etc., could further the AI-Alignment agenda, possibly through people/political/influence means (over several years) rather than by doing anything technical.

Atm I'm trying (and failing) to Hold Off on Proposing Solutions™️, survey what I know and how I think I know it, look at my options, and decide how to move my life in a direction that could also lower P(Doom).
I think I would like to be in a world where P(Doom) is 5%. Then, I'd probably think I'm not responsible for it. But I don't think I'm in that world. Just making sure though. 


EDIT a few days after the post
I found an interesting article, which seems well regarded within the industry - it's quoted by the Future Fund as "[a] significant original analysis which we consider the new canonical reference on [P(misalignment x-risk|AGI)]".
The article makes the same claims I did to begin with (see abstract)... 
But it puts AI risk at >10%: Joseph Carlsmith, "Is Power-Seeking AI an Existential Risk?"


2 Answers

Ivan Vendrov

Sep 24, 2022


I think a substantial fraction of ML researchers probably agree with Yann LeCun that AI safety will be solved "by default" in the course of making the AI systems useful. The crux is probably related to questions like how competent society's response will be, and maybe the likelihood of deceptive alignment.

Two points of disagreement though:

  • I don't think setting P(doom) = 10% indicates lack of engagement or imagination; Toby Ord in The Precipice also gives a 10% estimate for AI-derived x-risk this century, and I assume he's engaged pretty deeply with the alignment literature.
  • I don't think P(doom) = 10% or even 5% should be your threshold for "taking responsibility". I'm not sure I like the responsibility frame in general, but even a 1% chance of existential risk is big enough to outweigh almost any other moral duty in my mind.

Hey! Thanks for sharing the debate with LeCun; I found it very interesting and I'll do more research on his views.

Thanks for pointing out that even a 1% existential risk is worth worrying about. I imagine it's true even in my moral system, if I just realize that a 1% probability that humanity is wiped out = 70 million expected deaths (1% of 7 billion), plus all the expected humans that would never come to be.
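For what it's worth, a minimal sketch of that expected-value arithmetic (the 7-billion population figure and the 1% probability are just the illustrative numbers from the sentence above, not precise estimates):

```python
# Rough expected-value arithmetic for an extinction-level risk.
# The population figure and the probability are just the illustrative
# numbers from the paragraph above, not precise estimates.

population = 7_000_000_000   # ~7 billion people
p_extinction = 0.01          # a 1% chance of humanity being wiped out

expected_deaths = p_extinction * population
print(f"Expected deaths: {expected_deaths:,.0f}")  # -> Expected deaths: 70,000,000
```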

That's the logical side.

Emotionally, I find it WAY harder to care about a 1% X-risk. Scope insensitivity. I want to think about where else in my thinking this is causing output errors.

Noosphere89

Sep 24, 2022


To be blunt, I'd argue that selection effects, plus vested interests in AGI happening, explain a distressingly large portion of the answer to this question.

(A weaker version of this applies to the opposite question, "Why do AI Safety people have high probability-of-doom estimates?" There, selection bias would account for at least a non-trivial portion of the reason this is true.)

3 comments

"in the minds of people like Eliezer Yudkowsky or Paul Christiano, we're more likely doomed than not"

My impression for Paul is the opposite – he guesses "~15% on singularity by 2030 and ~40% on singularity by 2040", and has said "quantitatively my risk of losing control of the universe through this channel [Eliezer's list of lethalities] is more like 20% than 99.99%, and I think extinction is a bit less likely still". (That said, I think he'd probably agree with all the reasons you stated under "I personally lean towards those latter views".) Curious to know where you got the impression that Paul thinks we're more likely doomed than not; I'd update more on his predictions than on nearly anyone else's, including Eliezer's.

My view of PC's P(Doom) came from (IIRC) Scott Alexander's posts on Christiano vs Yudkowsky, where I remember a Christiano quote saying that although he imagines there will be multiple AIs competing, as opposed to one emerging through a singularity, this would possibly be a worse outcome because it'd be much harder to control. From that, I concluded "Christiano thinks P(doom) > 50%", which I realize is pretty sloppy reasoning.
I will go back to those articles to check whether I misrepresented his views. For now I'll remove his name from the post 👌🏻

You might have confused "singularity" and "a singleton" (that is, a single AI (or someone using AI) getting control of the world)?