“We don’t want to be to AI what dogs are to humans.” — my congresswoman, at the end of our meeting.
Ironically, humans essentially becoming pets is treated as one of the more positive outcomes in the general discourse. That only underscores that we need a clearer view of what we want, and of what ordinary people would want, from these changes.
I personally live in the Netherlands, and I am not really sure what good policy for the Netherlands and the EU would look like. From the perspective of agency, I will try to find people to talk with about this.
A good policy for those nations would be to loudly tell the US and China: "don't build superintelligence; that is insanely risky." Those voices matter because they carry authority.
That creating superintelligence is very risky? As described above, plus describing the distribution of risk beliefs among alignment experts.
I think one other message that would carry a lot of weight (particularly with x-risk-skeptical politicians in middle powers) is that even if AI proves to be corrigible, middle powers will be left out by default. A world in which the U.S. or China has incredibly advanced AI systems is a world in which other powers have no power.
Yes, good point.
Unfortunately, that logic will also push toward the US and China dismissing middle powers' concerns as motivated by self-interest.
One weird trick to dramatically cut AI X-risk in 30 minutes.
So, you're concerned about AI going well and benefiting humanity? Or, perhaps more aptly, you're concerned that by default AI won't go well and will lead to extinction (or to a panoply of other bad outcomes). And being a rational, discerning LessWrong reader, you likely also want to have a real impact on this.
The optimal choice may be working on AI alignment, but for those of us in the 99.99% who don't have the technical skills or theoretical understanding to make an impact there: this is for you.
For a long time I have had enormous anxiety about AI x-risk. People in my life would tell me, "there's nothing you can do about it, so why worry?" I would argue that they were half right: anxiety is pointless unless it leads to action. I am by default a spiteful person. I loathe being told I can't have an impact, and, to be quite honest, I enjoy proving others wrong, so I set out to try to do something.
After basically no consideration, I determined that the most likely (and most direct) path to impact would come through discussing AI with policy makers, and so far I have had (what I would consider to be) enormous impact for relatively little time investment. Here's a list of tips I've found work extraordinarily well. (I will admit this is U.S.-centric, but I think things are similar in other democratic countries.)
Just do it
First and most importantly, do not overthink it. Just go to your local representative's website (state, local, federal). Aim high (but also don't discount staffers; they have a huge influence on how an individual representative thinks about things). The number one reason people fail to have impact is that they fail to act. Don't be the person who spends two months planning and never acts.
Bonus Tip: If you know people with political connections, that can also work. Wealthy people often donate to campaigns.
Bonus Tip 2: If you struggle at the federal level, state representatives are often very easy to contact and also have huge influence. A state law being passed can also impose obligations on AI providers.
Be Credible
If you want to maximize impact, showing up for a meeting with a policy maker requires a certain degree of seriousness. Introduce yourself and any relevant credentials, dress professionally, be the kind of person they are going to take seriously.
Policy makers (like everyone else) use heuristics to judge who they are talking to. Wearing a fedora, having goofy facial hair, or dressing in a graphic t-shirt all damage your credibility. (I'm not saying these things are bad, just that they will harm your ability to achieve an outcome.)
If you have any credentials that relate to AI, use those. If not, reference your direct experience (for example, if you work in tech and AI is, in your experience, reducing the need to hire, use that!). You need to credibly establish two things:
Don't Assume They're AI Experts
Most people have no understanding of AI. Their experience with it may have begun and ended two years ago, when they tried out GPT-4 and it lied to them. If you jump straight into the orthogonality thesis, you've already lost. I've found that simple narratives resonate best (or at least get me the most head nods).
Use analogies; find ways to explain complex topics very simply while also making sure they understand you are an expert (and if you read LessWrong much, you are more of an expert than almost anyone they would talk to in a given month, so don't sell yourself short). Don't undersell your fears. Worried about 50% unemployment? Tell them that. Worried about extinction risks from AI? Tell them that too (though maybe don't jump straight there; build some credibility in the conversation first).
Be Concrete
Politicians are busy. They have many meetings. Make your policy proposals concrete and actionable. Saying "we should regulate AI" is not very useful or helpful.
The specific policy proposals you recommend should be written down and sent to the congressperson's staff after the meeting. The more specific (and copy-pasteable into a law being drafted in a Word doc), the better.
Make Asks
You would be amazed how often just asking for something works. At the end of the call I had with my congressperson, I asked "Hey is there anyone else you would recommend I meet with?" She listed out a senator and two other influential congress people. I then asked "Would it be possible to get an introduction for the meeting?" which she happily instructed her legislative affairs staffer to do. Just asking for things is a superpower few people realize they have.
Follow Up
By this point you should have the emails of congressional staffers. Follow up with a written document outlining your meeting. Follow up again later with more information. Ask for more introductions, if you impressed them they will likely help you.
Ironically, I find that people who don't work in tech often grasp AI risk far more easily than people who use it often and are deep in the X discourse. Simple, logical arguments from mutual interest work wonders.