Appreciate your take here, Habryka.
I agree it’s clear the AI Action Plan doesn’t reflect most of your priorities. A naive observer might say that’s because the accelerationists spent hundreds of millions of dollars building relationships with the group of people in charge of Congress and the White House, while the safety crowd spent their money on Biden and failed computer science research (i.e. aligning AI to “human values”).
From this position of approximately no political capital, one interpretation of the AI Action Plan is that it's fantastic that safety concerns got any concessions at all.
Here’s what a cynical observer might say: the government is basically incompetent at doing most things. So the majority of the accelerationist priorities in the plan will not be competently implemented. It’s not like the people who understand what’s going on with AI would choose slogging through the interagency process over making tons of money in industry. And even if they did, who’s going to be more effective at accelerating capabilities? Researchers making $100 million at Meta (who would be doing this work regardless of the Action Plan), or government employees (well known for being able to deliver straightforward priorities like broadband access to rural America)?
The cynical observer might go on to say: and that’s why you should also be pessimistic about the more interesting priorities being implemented effectively.
But here is where optimistic do-gooders can step in: if people who understand AI spend their precious free time thinking as hard as possible about how to do important things--like making sure the US has an excellent system to forecast risks from AI--then there's a possibility that good ideas generated by think tanks/civil society will be implemented by the US Government. (Hey, maybe these smart altruistic AI people could even work for the government!) I really think this is a place people might be able to make a difference in US policy, quite quickly.
Zvi, I disagree. I think there is no positive value to you taking a position on Trump admin rhetoric being good or bad. There are a zillion people commenting on that.
Your comparative advantage is focusing on specifics: what does the AI Action Plan instruct federal agencies to do? How can the most important priorities be implemented most effectively? Who is thinking clearly about this and has a plan for e.g. making sure the US has an excellent federal response plan when warning shots happen? Signal boosting smart, effective policies and policy thinkers is how you can best use your platform. Don't contaminate your message with partisanship.
Wading into "Trump is good/bad" discourse isn't going to change who wins the election. People you know already spent ridiculous amounts of money on this, and were crushed. Time to ignore politics and focus on policy: specifically, complicated tech policy that most people don't understand, that you have the ability to shape.