> This is a concern I take seriously. While it is possible that increasing awareness of the problem of AI will make things worse overall, I think a more likely outcome is that it will be neutral to good. I will strive to do justice to the positions and concerns people in the community have (while understanding that there is disagreement within the community).
A few thoughts on this:
a) I expect that addressing public misconceptions about AGI would be good.
b) I think it's important to try to explain some of the challenges of policy action and why it's very hard to figure out policies that increase our chances of success[1]. I'd also emphasise the importance of attempting to develop a nuanced understanding of these issues, as opposed to other issues where it is easier to dive in and start doing things. In particular, I'd explain how certain actions have the potential to backfire, even if you don't think they actually would backfire. And I would consider mentioning the unilateralist's curse and how producing low-quality content can cost us credibility.
I think it would be possible for this project to be net-positive, but you'd have to navigate these issues carefully.
Even increasing research funding might not do any good if it mostly ends up elevating work that is a dead-end or otherwise dilutes those doing valuable work.
Thanks for the comment. I agree and was already thinking along those lines.
It is a very tricky, delicate issue: we need to put more work into figuring out what to do while communicating that it is urgent, but not so urgent that people act imprudently and make things worse.
Credibility is key, and providing reasons for beliefs, such as those about timelines, is an important part of the project.
Yes, there are! *Superintelligence*, *Human Compatible*, *The Precipice*, *The Alignment Problem*, *Life 3.0*, etc. all provide high-quality coverage in different ways. But most of them are not intended for a general audience.
The piece of writing that comes to my mind as currently being best for a general audience is Wait But Why's The AI Revolution. It's not a book though.
Nice! Feel free to use Stampy content; I hereby release you from the BY part of our copyright so you can use it without direct attribution. We're a project by Rob Miles's team working to create a single point of access for learning about alignment (basics, technical details, ecosystem, how to help, etc.). By the time you publish, we might be a good enough resource that you'll want to point your readers at us at the end of your book for further reading.
It seems like it could be worthwhile for you to contact someone involved with the AGI Safety Communications Initiative, or at the very least check out the post I linked.
> This is a concern I take seriously. While it is possible that increasing awareness of the problem of AI will make things worse overall, I think a more likely outcome is that it will be neutral to good.
Another consideration is that it may be a risk for long-termists not to pursue new ways of conveying the importance and challenge of ensuring human control of transformative AI. There is a certain principle of being cautious in EA, yet in general we don't self-reflect enough to notice when being cautious by default is irrational on the margin.
Recognizing the risks of acts of omission is a habit William MacAskill has been trying to encourage and cultivate in the EA community over the last year, yet it's a principle we've acknowledged since the beginning. Consequentialism doesn't distinguish between action and inaction: failing to take an appropriate, crucial, or necessary action to prevent a negative outcome counts just as much as causing one. Risk aversion also receives more attention in the LessWrong Sequences than most other cognitive biases.
It's now evident that past attempts at public communication about existential risks (x-risks) from AI have proven neither sufficient nor adequate. The issue may not be drawing more attention to the problem so much as drawing more of the right kind of attention. In other words, it is necessary to carefully shape how AI x-risks are perceived by various sections of the public.
The way we as a community can help you ensure the book strikes the right balance may be to keep doing what MacAskill recommends:
- Stay in constant communication about our plans with others, inside and outside of the EA community, who have similar aims to do the most good they can
- Remember that, in the standard solution to the unilateralist’s dilemma, it’s the median view that’s the right one (rather than the most optimistic or most pessimistic view)
- Be highly willing to course-correct in response to feedback
No offense, but it's not obvious to me why communicating to a general audience could be a net positive. Exactly how do you expect this to help?
None taken; it's a reasonable question to ask. It's part of the broader problem of knowing whether anything will be good or bad (unintended consequences and such). To clarify a bit, by "general audience" I don't mean everyone, because most people don't read many books, let alone non-fiction books, let alone non-fiction books that aren't memoirs/biographies or the like. So my loose model is that (1) there is a group of people who would care about this issue if they knew more about it, (2) their concern will lead to interest from those with more power, and (3) that interest might increase funding for AI safety and/or governance in ways that help.
Expanding on (1), it could also increase the number of people who want to work on the issue, across a wide range of domains beyond technical work.
It's also possible that the book ends up net-positive yet still insufficient, while still having been worth trying.
I'd be happy to offer free editing services as my time allows, and perhaps take on the research assistant role if you could describe a bit more what you need or anticipate needing. I have a BS in statistics and have worked in AI for a few years.
The TL;DR is the title of this post.
Hi all, long-time EA/Rationalist, first-time poster (apologies if the formatting is off). I’m posting to mention that I’m 30,000 words into a draft of a book about the threat of AI written for a general audience.
People who read this forum would likely learn little from the book; it is intended for their friends and the larger group of people who do not.
Brief FAQ:
Q: What’s the book about? What’s the structure?
A: Summarizing the book in a few short sentences: Artificial Superintelligence is coming. It is probably coming soon. And it might be coming for you.
Structurally, the initial chunk makes the case for AGI/ASI happening at all; happening soon; and not obviously being controllable. In short, the usual suspects.
The next chunk will be a comprehensive list of all the objections/criticisms of these positions/beliefs and responses to them. The final chunk explores what we can do about it. My goal is to be thorough and exhaustive (without being exhausting).
Q: Why should this book exist? Aren’t there already good books about AI safety?
A: Yes, there are! *Superintelligence*, *Human Compatible*, *The Precipice*, *The Alignment Problem*, *Life 3.0*, etc. all provide high-quality coverage in different ways. But most of them are not intended for a general audience. My goal will be to explain key concepts in the most accessible way possible (e.g. discussing the orthogonality thesis without using the word "orthogonal").
Second, the market craves new content. While some people read books that are 2-10 years old, many people don’t, so new works need to keep coming out. Additionally, there have been so many recent advances that some coverage quickly becomes out of date. I think we should have more books come out on this urgent issue.
Q: Why you?
A: a) I have 14 years of experience explaining concepts to a general audience through writing and presenting hundreds of segments on my podcast The Reality Check;
b) I also have 14 years of experience as a policy analyst, again learning to explain ideas in a simple, straightforward manner;
c) I’m already writing it and I’m dedicated to finishing it. I waited until I was this far along in the writing to prove to myself that I was going to be able to do it. This public commitment will provide further incentive for completion.
Q: Are you concerned about this negatively impacting the movement?
A: This is a concern I take seriously. While it is possible that increasing awareness of the problem of AI will make things worse overall, I think a more likely outcome is that it will be neutral to good. I will strive to do justice to the positions and concerns people in the community have (while understanding that there is disagreement within the community).
Q: Do you need any help?
A: Sure, thanks for asking. See the breakdown of possibilities below.
a) If anyone is keen to volunteer as a research assistant, please let me know.
b) I’ll soon start looking for an agent. Anyone have connections to John Brockman (perhaps through Max Tegmark)? Or to other agents?
c) If smart and capable people want to review some of the content in the future when it is more polished, that would be great.
d) I’m waiting to hear back about possible funding from the LTFF. If that falls through, some funding to pay for research assistance, editors/review, or book promotion, or even to let me focus my time on this (as it is currently a side project), would be useful.
Q: Most books don’t really have much impact; isn’t this a longshot?
A: Yes. Now is the time for longshots.