Background

Most tech-savvy people today see AI as a cool tech trend, like the internet, smartphones, crypto, or virtual reality. The claim "AGI is pretty likely to kill you and your loved ones in your lifetime" has never been a belief they take seriously, or one they think many other people take seriously. They also don't intuitively perceive AGI as a risk relevant within their own lifetime, only as a low-probability long-term risk like an asteroid impact.

The Needed Resource

Something on the internet they can read or watch that guides such a person to the point where they realize AI existential risk might be a real, urgent thing, at least as much so as climate change; that they, personally, might be ringing in the new decade as a nonhuman configuration of atoms.

Benefit of Having This

The benefit is that a significant fraction of the people who come to see AGI risk as urgent will be motivated to do something helpful for the cause, a much larger fraction than among people who don't see the urgency.

Some Candidate Resources

Two off the top of my head:

[WaitButWhy] The AI Revolution: Our Immortality or Extinction

[LessWrong] AGI Ruin: A List of Lethalities

The WaitButWhy post does a great job of being generally accessible, while List of Lethalities packs in more of the arguments for why the danger level seems high. There seems to be a gap for a resource better suited to making a normal person realize this is a real and urgent thing, despite the social Overton window not yet sending them that message.

If there's no good resource, I might be up for working on something myself. Maybe a good format would be similar to that of a technical documentation site, where it's kind of a linear walkthrough but also lets you skip around to sections you're interested in.


4 Answers

trevor

Oct 16, 2022


I'd go with the "AI as a Positive and Negative Factor in Global Risk" paper from 2006; it would be even better if distilled (even though it's already about as intimidatingly long as the resources you proposed).

But the elephant in the room for a first-time explanation is that concepts have to be timed strategically, mindful of the absurdity heuristic, not just vomited out in an order ultra-optimized for epistemic flow (like computer code) rather than for the cultural mentality of the receiver.

This contest attempted to help with that by creating persuasive soundbites, and there's another one in the works for executive summaries. But ultimately I think the lessons learned from talking to academics have the best insights on the big bottlenecks here.

Lowell Dennings

Oct 16, 2022


ThomasW recommended [1] Unsolved Problems in ML Safety, [2] X-Risk Analysis for AI Research, and [3] Is Power-Seeking AI an Existential Risk? He said [3] is good for people with high openness to weird things and motivation around x-risk, and that if they're not as open, [1] offers concrete research areas and ML credibility. He also said he wouldn't share Yudkowsky material with ML people, and that people don't really respond well to openings that lead with x-risk and alarmism. Personally, I like "Is Power-Seeking AI an Existential Risk?" for its writing style and because it's a fairly comprehensive introduction. There's also a bounty for AI Safety Public Materials.

quanticle

Oct 16, 2022


Wasn't that one of the primary purposes of The Sequences? The fact remains that a generally smart, tech-savvy person is still going to have a lot of inbuilt biases around the nature of intelligence that they're going to have to unlearn before they can fully accept the orthogonality thesis, and understand that an AI that is smart won't automatically be good.

Edit: Another good resource is Superintelligence: Paths, Dangers, Strategies, but that might be an even tougher recommendation than The Sequences.

The orthogonality thesis is already naturally accessible and obvious: we know about highly intelligent sociopaths, the 'evil genius' trope, etc. The Sequences are flawed and dated in key respects concerning AI, such that fresh material is probably best.

quanticle
The orthogonality thesis is not as intuitive or as accessible as you think it is, and you have demonstrated that yourself with your references to "intelligent sociopaths" and "evil geniuses". An out of control superintelligent AI is not an evil genius. It is not a sociopath. It is a machine. It's closer to a hurricane or a tsunami than it is to anything that resembles a human. Sociopaths, evil geniuses and the like are human. Broken, flawed humans, but still recognizably human. An AI will not be. It will not have human emotions. It might not have emotions of any kind. As Eliezer put it, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

The orthogonality thesis is not as intuitive or as accessible as you think it is, and you have demonstrated that yourself with your references to "intelligent sociopaths" and "evil geniuses". An out of control superintelligent AI is not an evil genius. It is not a sociopath. It is a machine. It's closer to a hurricane or a tsunami than it is to anything that resembles a human.

Hurricanes and tsunamis don't think; humans do, so an actual AGI is much closer to a human (super obvious now: GPT-3, etc.).

An AI will not be. It will not have human emotions. It might not have emotions of any kind. As Eliezer put it,

If your model of AI comes from reading the Sequences, it was largely wrong when they were written, and it is now terribly out of date. The likely path to AI is reverse engineering the brain, as I (and many others) predicted based on the efficiency of the brain and the tractability of its universal learning algorithms, and as demonstrated by the enormous convergent success of deep learning.


Adam Zerner

Oct 16, 2022


To me, what would make the most sense is some sort of survey of expert opinion. This is a very difficult thing to form an opinion on from first principles, and looking at expert opinion is what we usually do in other scenarios.

For example, I have had Achilles tendinitis for about eight years now. Recently the doctor proposed PRP injections or stem cell therapy. To form an opinion on whether this is worth it, I could have looked at it from first principles, digging into the biochemistry, but that just doesn't seem very practical. It would require expertise I don't have and would take an unreasonable amount of time to acquire, even for me as someone with a degree in neuroscience.

Instead, it was better to look at the opinions of experts, or, even more conveniently, to read a distillation of those expert opinions by someone I trust. I found such a distillation, and it allowed me to adopt some average of those expert opinions as my own.

But even if that kind of distillation of expert opinion existed for AI, there are still some roadblocks. With AI, we're talking about something that sounds crazy, science-fiction-y, and naive. We're talking about robots having an intelligence explosion where they end up turning the universe into paperclips, and we're talking about this happening in something like 10-50 years. Even if there were a distillation of expert opinion saying it's legit, it takes a special type of memetic immune disorder to believe something like this, and the "typical smart tech-savvy person" does not have it. Instead, they have "antibodies" protecting them from such beliefs, and, more broadly, from taking ideas seriously.

Well, maybe. Maybe if there truly were a large scientific consensus, that would be enough to fight off those antibodies and the "typical smart tech-savvy person" would "get it". But we currently don't have anything close to such a consensus. From what I understand, the consensus is quite narrow: mostly just AI safety researchers, and of course people who go into that as a career would hold such a belief, just as people who research underwater basket weaving believe that underwater basket weaving is extremely, extremely important. If you expand to AI professionals more generally, my understanding is that the fear is much smaller, and the same holds as you expand to machine learning, software, STEM, and then smart people across other fields. If there were a strong consensus among, let's say, STEM professionals that AGI risk is an urgent problem, then I'd probably expect "typical smart tech-savvy people" to adopt that belief as well. But we aren't at that point. Right now I think the consensus is too narrow to kill off the antibodies.

And that might be justifiable; I'm not sure. Personally, I have spent a long time in the rationality community and developed a very high level of epistemic trust in people in the community, especially the ones at the top, and that is what allows me to update hard toward AI being a tremendously serious existential risk. If we imagine a different scenario where, say, astronomers believe that a black hole is going to swallow the entire universe some time in the next 10-50 years, but physicists more generally aren't too concerned, and scientists and other smart people more generally are basically unconcerned, well, I'm not sure what I'd think. Maybe the astronomers are taking things too far. Maybe outsiders are being ignorant. I dunno. Given the magnitude of "destroy the universe" and my epistemic respect for astronomers, I'd probably have to look into it more closely, but I'm skeptical that the "typical smart tech-savvy person" would have that instinct.

Good point. I think the resource I'm imagining should include a section on "What do the experts think?", showing that a significant number of very smart experts think this, and that there's arguably a trend toward more and more of them agreeing.

I still think most of the resource should present the arguments themselves, because I think most people in tech largely convince themselves with their own intuitive arguments, like "AI just does what we tell it" or "a smart enough AI won't be mean".