I will start by saying that I generally agree with Yudkowsky's position on AI. We must proceed with extreme caution. We must radically slow down AI capability advancement. We must invest unfathomable amounts of resources in AI alignment research. We need to enact laws and treaties that will help keep it all together for as long as possible, in the hope that we figure things out in time.


The laughter at the recent White House press conference, in response to a question about Yudkowsky's argument, shows how far the public debate is from a sensible position of caution.

But I am hopeful that we can change that. Few people laugh at nuclear weapons now. We are a species capable of cooperation and of taking things seriously. As the saying goes:

"First they ignore you, then they laugh at you, then they fight you, then you win."


What is missing is public understanding of the dangers of misaligned or unaligned AI. Democracy does not work in darkness. People must know the dangers, the uncertainty, and the ways they can contribute.

That's why I am proposing a public awareness campaign about x-risk from AI. So far, it's just me and my wife. Please join me, especially if you work in advertising, marketing, PR, activism, politics, law, etc., if you know how to make a website, or if you want to create PR materials, meet journalists, do accounting, fundraise, etc.

Please share this with people who do not read Less Wrong but are freaked out and want to do something.

I do not know exactly how this campaign will run or which countries to focus on. I am only human and can contribute only a small share of the total required effort. My background is in consulting and market research, and I run a market research company. At this stage, I can best contribute by coordinating and facilitating operations.

We need people, money, expertise, patience, etc. Please join: https://campaignforaisafety.org/

Ruby:

I'd urge a lot of caution here. If you've recently updated that AI is a very big deal, then it can feel like you have to do something right now. But there can be both upside and serious downside if you do this wrong. There are a lot of people thinking about this, so I'd work to reach them and coordinate rather than succumbing to the unilateralist's curse.

Thank you for your words of caution, @the gears to ascension, @Ruby, @Chris_Leong.

Indeed, I have only recently updated on AI. I lived happily believing AGI was just nonsense, after seeing gimmick after gimmick and slow progress on anything general. This all came as a rude shock a couple of weeks ago.

I will heed your advice on consulting with others.

I am, however, of the firm opinion that AI alignment is not going to be solved any time soon. The best thing is simply to shut down progress on new capabilities indefinitely. I do not see that being done without the force of law, and politics will inevitably be at play.

Don't let your firm opinion get in the way of talking to people before you act. It was Elon's determination to act before talking to anyone that led to the creation of OpenAI, which seems to have sealed humanity's fate.

I think it is misleading to state it that way. There were definitely dinners and discussions with people around the creation of OpenAI.
https://timelines.issarice.com/wiki/Timeline_of_OpenAI 
Months before the creation of OpenAI, there was a discussion about starting it that included Chris Olah, Paul Christiano, and Dario Amodei: "Sam Altman sets up a dinner in Menlo Park, California to talk about starting an organization to do AI research. Attendees include Greg Brockman, Dario Amodei, Chris Olah, Paul Christiano, Ilya Sutskever, and Elon Musk."

Thanks, that's useful. Sad to see no Eliezer, no Nate, nor anyone from MIRI or with a similar perspective, though :(

I'd encourage you to consult widely with people in AI Safety/governance before running a large public awareness campaign. The AI Safety Governance course is likely a good place to start in terms of skilling up/better understanding this issue. I think it's possible for a public relations campaign to move the needle, but it's also very important to guard against downside risks and to think very carefully and strategically about the path to impact.

For example, if we ask for the government to sponsor research, how do we ensure the money actually goes towards alignment rather than people who just frame their research in terms of alignment?

Or take, for example, "critical decisions regarding AI safety must not be taken by small groups of AI researchers". I agree we would like to avoid a small group of researchers making decisions without consulting anyone else, but at the same time, I'd much rather have decisions made by researchers than by politicians, who would most likely be clueless and too focused on appearance rather than substance.

Have you looked at the AI alignment fieldbuilding tag at all? It seems to me likely that the approach you're using will result in engaging with standard political information flows in ways that activate unintelligent parts of people's fact evaluation. Your immediate steps are instrumental steps which campaigns often use, and it's not obvious at all that your campaign is ready to succeed. I am, in general, enthusiastic about networking and organizing, but not enthusiastic about campaigning for attention or advertising without a communicative goal. Your site doesn't seem inherently terrible, and it's quite possible I'm simply wrong. But this is my first impression.

To be clear, I'd love to see more folks thinking carefully about how to communicate issues in ways others can digest. I'm a big fan of this dude's video on soft language (text version) for how to communicate about intense topics in large-scale communication projects. So maybe the thing to take away from my comment is just that some rando in the field was slightly hesitant, rather than that I'm some sort of authority who can tell you your approach sucks. Try what you know how to do, after all.

The lack of names on the website seems very odd.

[anonymous]:

Thank you for doing this. I'm thinking that at this point, there needs to be an organisation with the singular goal of pushing for a global moratorium on AGI development. Anyone else interested in this? Have DM'd.