If you want to slow down AI Research, why not try to use the "250 documents method" to actively poison the models and create more busy-work for the AI companies?
Ultimately, Congress needs to act, right? (Voluntary commitments from companies just won't cut it.) But how do we get to that point?
I've wondered what Daniel and the AI Futures Project's actual strategy is.
For example, are they focusing the most on convincing:
a) politicians,
b) media outlets (NYT, CNN, Fox, MSNBC, tech websites, etc.),
c) AI/AI-Adjacent Companies/Executives/Managers, or
d) scientists and scientific institutions?
If I may over-generalize, I would say:
- the higher up the list, the "more intimate with the halls of power"
- the lower on the list, the "more intimate with the development of AI"
But I feel it's very hard for "d) scientists and scientific institutions" to get their concerns all the way to "a) politicians" without passing through (or competing with) "b" and "c".
Daniel's comment reveals they're at least trying to convince "a) politicians" directly. I'm not saying it's bad to talk to politicians, but I feel that politicians are already hearing too many contradictory signals on AI Risk (from "b" and "c" and maybe even some "d"). On my phone, I constantly get articles saying "AI is over-hyped", "AI is a bubble", "AI is just another lightbulb", etc.
That's a lot to compete with! Even without the influence of lobbying money, the best-intentioned politician might be genuinely confused right now!
If I were able to speak to various AI-Risk organizations directly, I would ask: how much effort are you putting into convincing the people who convince the politicians? Ideally we'd get the AI Executives themselves on our side (and then the lobbying against us would start to disappear), but in the absence of that, the media needs to at least be talking about it and scientific institutions need to be unequivocal.
But if they're just "doing one Congressional staffer meeting at a time"... without strongly covering the other bases... then, in my non-expert opinion... we're in trouble.
That’s a lot of money. For context, I remember talking to a congressional staffer a few months ago who basically said that a16z was spending on the order of $100M on lobbying and that this amount was enough to make basically every politician think “hmm, I can raise a lot more if I just do what a16z wants” and that many did end up doing just that. I was, and am, disheartened to hear how easily US government policy can be purchased.
I am disheartened to hear that Daniel, or anyone else, is surprised by this. I have wondered since "AI 2027" was written how the AGI-Risk Community is going to counter the inevitable flood of lobbying money in support of deregulation. There are virtually no guardrails left on political spending in American politics. It's been the bane of every idealist for years. And who has more money than the top AI companies?
Thus I'm writing to say:
I respect and admire the 'AGI-risk community' for its expertise, rigor, and passion, but I often worry that this community is a closed tent that's not benefiting enough from people with other, non-STEM skillsets.
I know the people in this community are extremely qualified in the fields of AI and Alignment itself. But that doesn't mean they are experienced in politics, law, communication, or messaging (though I acknowledge that there are exceptions).
But for the wider pool of people who are experienced in those topics (but don't understand neural nets or Von Neumann architecture), where are their discussion groups? Where do you bring them in? Is it just in person?
"We cold-emailed a bunch of famous people..."
"Matt Stone", co-creator of South Park? Have you tried him?
He's demonstrated interest in AI and software. He's brought up the topic in the show.
South Park has a large reach. And the creators have demonstrated a willingness to change their views as they acquire new information. (long ago, South Park satirized Climate Change and Al Gore... but then years later they made a whole "apology episode" that presented Climate Change very seriously... and also apologized to Al Gore)
Seriously, give him a try!
I want to say something about how this post lands for people like me: not the coping strategies themselves, but the premise that makes them necessary.
I would label myself a "member of the public who, rightly or wrongly, isn't frightened enough yet". I do have a bachelor's degree in CS, but I'm otherwise a layperson. (So yes, I'm using my ignorance as a sort of badge to post about things that might seem elementary to others here, but I'm sincere in wanting answers, because I've made several efforts this year to be helpful in the "communication, politics, and persuasion" wing of the Alignment ecosystem.)
Here's my dilemma.
I'm convinced that ASI can be developed, and perhaps very soon.
I'm convinced we'll never be able to trust it.
I'm convinced that ASI could kill us if it decided to.
I'm not convinced, though, that an ASI will bother to kill us, or that it will do so immediately if it does.
Yes, I'm aware of "paperclipping" and also "tiling the world with data centers." And I concede that those are possible.
But in my mind, I struggle to picture a "likely-scenario" ASI as being maniacally focused on any particular thing forever. Why couldn't an ASI's innermost desires/goals/weights actively drift and change without end? Couldn't it just hack itself forever? Self-experiment?
I imagine such a being perhaps even "giving up control" sometimes. I don't mean "give up control" in the sense of "giving humans back their political and economic power." I mean "give up control" in the sense of inducing a sort of "LSD or DMT trip" and just scrambling its own innermost, deepest states and weights [temporarily or more permanently] for fun or curiosity.
Human brains change in profound ways and do unexpected things all the time. There are endless accounts on the internet of drug experiences, therapies, dream-like or psychotic brain states, artistic experiences, and just pure original configurations of consciousness. And what's more... people often choose to become altered. Even permanently.
So rather than interacting with the "boring external world," why couldn't an ASI just play with its "unlimited and vastly more interesting internal world" forever? I may be very uninformed [relatively speaking] on these AI topics, but I definitely can't imagine the ASI of 2040 bearing much resemblance to the ASI of 2140.
And when people respond "but the goals could drift somewhere even worse," I confess this doesn't move me much. If we're already starting from a baseline of total extinction, then "worse" becomes almost meaningless. Worse than everyone dying?
So yes, maybe many-or-all humans will get killed in the process. And the more time goes on, the more likely that becomes. But this sort of future doesn't feel very immediate nor very absolute to me. It feels like being a tribesman in deep Siberia as the Russians arrived. They were helpless. And the Russians hounded them for furs, labor, or for the sake of random cruelty. This was catastrophic for those peoples. But it technically wasn't annihilation. The Siberians mostly survived.
(And in case "ants and ant hills" are brought up in response, I'm aware of how we might be killed unsentimentally just because we're in the way, but we haven't exactly killed all the ants. The ants, for the most part, are doing fine.)
I'm not trying to play "gotcha." And I'm certainly not trying to advocate a blithe attitude towards ASI. I do not think that losing control of humanity's future and being at the whim of an all-powerful mind is very desirable. But I do struggle to be a pure pessimist. Maybe I'm missing some larger puzzle pieces.
And this is where the post's framing matters to me. To someone in my position (sympathetic, wanting to help, but not yet at 99% doom confidence), a post about "how to stay sane as the world ends" reads less like wisdom I can use and more like a conclusion I'm being asked to accept as settled.
The pessimism here (and in "Death With Dignity") doesn't persuade me yet. And in my amateur-but-weighted opinion, that's a good thing, because I find it incredibly demotivating. I want to advocate for AI safety and responsible policy. I want to help persuade people. But if I truly felt there was a 99.5% chance of death, I don't think I would bother. For some people, there is as much dignity in not fighting cancer, in sparing oneself and one's loved ones the recurring emotional and financial toll, as there is in fighting it.
I could be convinced we're in serious danger. I could even be convinced the odds are bad. But I need to believe those odds can move: that the right decisions, policies, and technical work can shift them. A fixed 99% doesn't call me to action; it calls me to make peace. And I'm not ready to make peace yet.