If you would come to a very similar March but object to details of the current framing, please let me know in the comments, and consider registering your email via the "Keep me informed" checkbox without making the commitment.
There's a decent chance I would join the March as-is, given that I directionally agree with its sentiment and its recommendation. But I don't agree with some of the "We believe..." statements, which sound like they are intended to speak for all of the people who came to the March.
I disagree with these:
We believe that if any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today.
This is stated quite confidently, implying >>50% on this, while I have less than 50%. Maybe it could be over 50% under a strict operationalization of what counts as remotely similar to current techniques and present understanding. In any case, I think I disagree with what most people would take away from this statement.
It's not useful for only one country to ban advancement of AI capabilities within its own borders. AI development would just keep happening in other countries by people who didn’t understand the dangers, until eventually someone somewhere built machines that were substantially smarter than any human.
This seems to imply that the US government could not on its own significantly decrease p(doom). That seems very wrong to me, implementing a slowdown for a few months to a year at the right moment seems like a huge deal. An international treaty would be better, but this seems too defeatist about domestic options.
Yeah, I think (not speaking for MIRI) that the FAQ should be rephrased so the vibe is more "here's what we believe, but there are a bunch of reasons you might want to support this."
> It's not useful for only one country to ban advancement of AI capabilities within its own borders.
This seems to imply that the US government could not on its own significantly decrease p(doom).
I think my personal beliefs would say "it's not very useful" or something. I think the "ban AGI locally" plan is dependent on a pretty specific path to be useful and I don't read the current phrasing as ruling out "One country Bans it and also does some other stuff in conjunction." (actually, upon reflection I'm not that confident I know what sort of scenario you have in mind here)
I think that a slowdown that is in the neighborhood of "ban AI development temporarily near but not after max-controllable AI" could potentially be very impactful. Banning AI development for long enough to allow China to pull ahead is less clear. I'm not sure what the intention of the sentence was, but to me it seems to imply that any domestic action on its own would be of very little use.
The thing I saw the sentence as doing is mostly clarifying "We're not naive, obviously just doing the naive thing here would not work, that's why we're not asking for it." (I think I agree that a US ban would be some-kind-of-useful, but it feels way less politically viable to me, since it feels more like throwing away the lead for no reason to most people. I realize it may sound weird to think "banning in one country is less viable than banning worldwide", but I think the worldwide ban actually clearly makes sense in a way that banning locally only maybe makes sense if you tune the parameters just right.)
"ban AI development temporarily near but not after max-controllable AI"
I'm not sure I'm parsing the grammar here, wondering if you flipped the sign or I'm misreading. (It sounds like "AIs that are almost uncontrollable are banned, uncontrollably powerful AIs are allowed.")
Came here to post something similar. My quick guess is that such a march would do better on attendance times direction by expanding the scope a fair bit, but I realize that the creators have certain preferences here.
[on the fence about posting this comment, consider it low expected value [<--RFW?]]
Re: the legitimacy of large marches, I'd guess there are additional ways to get legitimacy. Basically, stuff that normal marches have. E.g.
Flag proposal:
Reasons why I think this is a good flag:
Yeah when I was doing the graphics I considered a version where everyone was waving stop signs. It looked a bit weird as an illustration but I suspect would probably work in real life.
Solid idea! I'd worry a bit, around the edges, for a couple reasons:
Flag mock-ups based on the IABIED website icon (rough. I'm not a designer. Just trying to make it CGPGrey approved.):
1: Keep it simple: Something a child could draw
2: Make it distinct at a distance: No tiny details
3: Three colors or fewer
4: Symbols, colors and designs should mean something
5: Words on a flag: ideally zero
5b: Don't write your name on your flag
Large Square:
Smaller Square:
Square in context:
I wasn't a fan of this flag, but I kind of like it in the context of a real march. The shirts are probably the best bit, though. I imagine the backs of them having a slogan.
Based on this Australian cave art hand stencil:
Halt (Hand Only):
Halt (Hand Stenciled):
Halt (Hand Stenciled, Wrist included):
Halt in Context:
I think these look kinda scary, in particular the black/red. White/red does feel more reasonable, although basically the original March page was constrained by the aesthetic of the book cover, and by the time there's serious effort invested in this I'd expect the aesthetic is going to get an overhaul that isn't as tied to the book.
it seems really rough to commit to attend something irl that will be scheduled at some random point in the future. even with 30 days of notice, the set of days that would make this feasible for me is still <50% of all days.
Yeah. On my end I'm like "well, this will be among the more important things I do that month/year, I expect to just actually be able to prioritize it over other existing plans."
I do think it'd be hypothetically nice to have an "80% likely to attend" or "I'mma make a good faith effort to attend" button, but it amps up the complexity of the page a bunch. (Realistically, I expect most people who sign up who are not rationalists* to in practice mean something more like this.)
For now I'd just say "click the notify me" option, which I think is still a useful signal.
* or other flavors of "take their word abnormally seriously"
I believe I might have been the literal first person, outside of the lightcone team, to sign up for this. Also the second person, depending on their data validation, since I entered my name and email twice.
A lot can change between now and 100,000 pledges and/or human extinction. As of Feb 2026, it looks like this possible march is not endorsed or coordinated with Pause AI. I hope that anti-AI-extinction charities will work together where effective, and I was struck by this:
The current March is very centered around the book. I chose the current slogan/design expecting that, if the March ever became a serious priority, someone would put a lot more thought into what sort of slogans or policy asks are appropriate. The current page is meant to just be a fairly obvious thing to click "yes" on if you read the book and were persuaded.
My personal guess (not speaking for MIRI) is a protest this large necessarily needs to be a bigger tent than the current design implies, but figuring out the exact messaging is a moderately complex task.
It seems like Pause AI have put at least some thought into this moderately complex task. They also are building experience organizing real world protests that MIRI doesn't have as far as I know. A possible implication is that MIRI thinks that Pause AI is badly run, and would rather act alone. Or that Pause AI thinks MIRI is badly run. Or MIRI is not investing the time in trying to organize endorsements until they have more pledges. Or something else.
I'm skeptical of this take:
Marches can be very powerful if they’re large, but can send the wrong message if they’re small.
The first protest of "School Strike for Climate" was a single 15-year-old girl, Greta Thunberg. Obvious bias is obvious. But it probably wasn't going to send the wrong message as a small protest: if it had gone nowhere, I would never have heard about it. If tiny marches were self-sabotaging, then I would expect more false-flag marches intended to have sparse attendance. Instead, I think small events don't send any mass message, and potentially have other value.
Edit: after posting this I saw Raemon's thoughts on this point, which I think address it.
MIRI was for many years dismissive of mass messaging approaches like marches. I wonder if this page is about providing an answer when people ask questions like "if you think everyone will die, why aren't you organizing a march on Washington?", rather than being a serious part of MIRI's strategy for reducing AI risk. It doesn't seem especially aligned with MIRI Comms is hiring (Dec 2025), which seems more focused on persuasion than mobilization.
Disclaimer: These are observations, not criticism. I have organized zero marches or protests.
"Don't build it" is short for "Don't build it yet" or "Don't build it before it's proven that doing so will not cause extinction" or something else, right? That is, Y&S say in IABIED that they prefer that ASI gets built eventually. I'm not sure whether this nuance should be included on the landing page, but if there's a simple way to include it, it probably should be added.
I suspect that you'd get more people to pledge to attend if the messaging encouraged everyone who agrees with "Don't Build It" to pledge, rather than just the small subset that believes IABIED. (I pledged despite disagreeing with IABIED, but I suspect that "we believe" IABIED messaging would discourage many who otherwise favor not building ASI from pledging.)
I wish there were some discussion about the location. Why Washington, DC? Aren't most AI-safety-aligned people in the Bay Area and London (and a little bit in continental Europe)? If we want to get 100,000 people ASAP, I feel any of the following locations would be better than Washington, DC:
Bay Area - already a huge base of people to come, I bet you would get a few thousands registrations immediately
NYC - just a lot of people who may have read IABIED and get the message
London/European capital - in the current situation, it is easier for American citizens to travel to Europe than the opposite
(again not MIRI, just sharing my own models and understanding)
The whole point is to send a message to DC people, and by the time we're talking about hitting 100,000 I don't think being in the Bay Area helps that much.
It wouldn't help p(send message to DC) but I do think it would help p(100,000 pledges).
(-- edit --): After further discussion with Raemon I now disagree with this comment. I think changing the location to SF would help get more pledges sooner, but that isn't the goal, the goal is to actually get to 100,000 and to actually have an impactful protest, and I suspect that more people total would be willing to travel to DC for a protest since it makes more sense to hold it there.
I think the effect there would be pretty minimal; there are not more than a few thousand people in any given city who are likely to show up. It'd be weird to ask 90k people to travel to San Francisco, since that doesn't send a particular message about international treaties. (You might run a different protest that is "please, AI companies, unilaterally stop", but I don't actually think that protest makes much sense.)
I see where you are coming from. It's like there's three factors here:
I agree DC makes sense for the first function. I don't really know too much about global policy makers, but I guess lots of important ones hang out in DC?
As for the second function, I think it probably is higher in SF than DC, though I'm not that confident about that, and you are correct to point out that it isn't necessarily the most important thing, since most of the people for a 100,000-person march will need to have travelled to get there. So from that perspective, the third function is much more important. What location is most likely to cause people to travel to it...
So I can only conclude... we should actually not all march on DC, but some pleasant vacation destination known to be popular with affluent people... that will really draw in the fence sitters! Well... I'm mostly joking about that. It probably messes up the first function, which is still basically the whole point of holding a march.
I think the number of people who would come to the march in their city on the weekend/evening of the work day is significantly higher than the number of people who would travel for it cross-country.
I think a 100k march would be a sign for policy makers and get into the news anyway, whether it's in NYC, SF, Washington, or in the middle of the desert.
Also, IMO, it would make more sense to start with a lower threshold - 10k for example.
I mentioned in my reply to Raemon how this can be seen as three factors:
And we could maybe do better by drawing people in if the location scores highly both for function 1 and function 3. So maybe trying to focus on central locations that are easy to travel to, and either cheap, or pleasant to travel to. I think focusing on cheap and central, maybe a town where a big data center has been built would send a good message. On the other hand, focusing on pleasant, maybe vacation destinations popular with policy people is the way to go. Tropical beaches where people from around the world go to enjoy the weather spring to mind. I don't know if this line of thinking is fruitful or not. Probably DC is the conventional location to stage a mass protest for a reason, but maybe not.
On the other hand, I think there is something to the idea of having distributed protesting... Maybe lots of smaller thresholds for individual cities, and then some threshold for number of cities. It's getting a bit complicated, but would it still send a message if a significant population protested in a significant number of cities around the world?
I completely disagree. It will mobilize supporters, get to the news, and attract attention. The next march may attract 15k, the march after that 20k etc.
Example: during the protests in Moscow, Russia in 2011 after the electoral frauds, the first big rally gathered 50k-100k. The second gathered 100k-200k, since people saw - it is totally fine to come to such rallies.
To put in other words: if your goal is 100k march in Washington, DC, I think an intermediate 10k march in SF would increase the chances to achieve this goal.
Okay, I think I don't stand by my previous statement. More like, I expect that overall process to be a lot more expensive than just going for a big protest off the bat. Obviously, yeah, there's a more common pattern of escalating groundwork and smaller protests.
But I think that path is dramatically more expensive, to the point where it doesn't seem worth my time, in a way that going straight for the big protest does.
I don't really have that much confidence that it's possible to get a big protest off the bat. But I think there is a discrete step-change between "you got the AI safety folk to all show up once" and "you got a substantial fraction of mainstream support." Once you're trying to do the latter, the SF benefit just seems very low to me.
The mechanism by which I'd try to hit 10k numbers involves starting from scratch recruiting a lot of people, at which point I might as well just start in DC. A crux is that I expect a 100k protest to involve similar amounts of work as a 10k protest, and to require calling in favors from famous people that are very expensive and that I don't want to have to call in twice.
(I also note that your Russia example starts with a 50k-100k protest, which is already a different league.)
Some reasons I'm more bullish on "just go for a big protest right off the bat."
I agree with your points and think they're valuable to point out and focus on.
Another thing I got wondering about is the size threshold. Like, I'm willing to drop what I'm doing and fly to DC for a 100k protest because of how much of an impression I think that would make, but I don't feel like I would do that for a 10k protest. Some of that could be anchoring bias, and it's hard to say how much, but there's definitely interesting dynamics in choosing the threshold and how it is presented. I think 100k is a good number, but is it the best number for:
I don't think I know how to answer that, but I like clarifying that those are the things I think we're trying to maximize in choosing a pledge. Though it's already 100,000 and people are already pledging, so for that reason, it's probably the best!
I hear what you're saying here and I think it makes sense, but I also really agree with Raemon's response. I wonder if there is some hybrid approach that could get the best of both worlds, like having one big pledge for DC, but also having smaller, local pledges. Sorta like what I mentioned in my other comment about distributed protesting.
tl;dr: You can pledge to join a big protest to ban AGI research at ifanyonebuildsit.com/march, which only triggers if 100,000 people sign up. You can also sign up to be notified (of that protest, and other potential protests that might take a different shape)
The If Anyone Builds It website includes a March page, wherein you can pledge to march in Washington DC, demanding an international treaty to stop AGI research if 100,000 people in total also pledge.
I designed the March page (although am not otherwise involved with March decisionmaking), and want to pitch people on signing up for the "March Kickstarter."
It's not obvious that small protests do anything, or are worth the effort. But, I think 100,000 people marching in DC would be quite valuable because it showcases "AI x-risk is not a fringe concern. If you speak out about it, you are not being a lonely dissident, you are representing a substantial mass of people."
The current version of the March page is designed around the principle that "conditional kickstarters are cheap." MIRI might later decide to push hard on the March, and maybe then someone will make a stronger bid to bring in the people who are on the fence.
For now, I mostly wanted to say: if you're the sort of person who would fairly obviously come to a big MIRI-run March in DC to stop superintelligence development if it was sufficiently big, please take 30 seconds to go to the March page and enter your email and click "submit."
I'd particularly like to get the signup count over 1,000 (currently it's at 711), which feels like the minimum number where I expect people to look at it and think "okay maybe that might happen" so there's more chance of it snowballing.
If you would come to a very similar March but object to details of the current framing, please let me know in the comments, and consider registering your email via the "Keep me informed" checkbox without making the commitment.
Probably expect a design/slogan reroll
The current March is very centered around the book. I chose the current slogan/design expecting that, if the March ever became a serious priority, someone would put a lot more thought into what sort of slogans or policy asks are appropriate. The current page is meant to just be a fairly obvious thing to click "yes" on if you read the book and were persuaded.
(To give some example: one alternate framing I considered and pitched MIRI on was "Build an Off Switch", where the immediate ask is more like "Lay some political and infrastructure groundwork to make it at least possible to stop AGI development later." There are a few other frames that have been discussed.)
My personal guess (not speaking for MIRI) is a protest this large necessarily needs to be a bigger tent than the current design implies, but figuring out the exact messaging is a moderately complex task. Right now it just seemed worth spending any effort at all on letting people know on LW who might just obviously want to signup for the current thing.
Pledge, or Signup to be Notified
FAQ
(From the March page)
What's the goal of the Don't Build It march?
We're calling for an international treaty to ban the development of Artificial Superintelligence (ASI).
Why?
We believe that if any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today.
Why do you think that?
AI companies are racing to create artificial superintelligence, while having no idea what they’re doing. This is a bad idea according to theory. And in practice, the warning signs are mounting.
We lay out the argument as clearly and concisely as we can in our book. You can also read our online resources, which go into some details not covered in the book.
Why does the pledge only take effect if 100,000 people pledge to march?
Marches can be very powerful if they’re large, but can send the wrong message if they’re small. So we're only asking that you pledge to march if enough other people will join in. If we reach 100,000 pledges, we will schedule the march. If not, we’ll put our efforts towards other strategies instead.
We might reach critical mass soon. Or it might take months or years, as more people learn about the dangers of ASI.
What do you mean by "international treaty"?
It's not useful for only one country to ban advancement of AI capabilities within its own borders. AI development would just keep happening in other countries by people who didn’t understand the dangers, until eventually someone somewhere built machines that were substantially smarter than any human.
This isn't an issue where lawmakers can pass a law inside their own country to protect their own people. If anyone builds machine superintelligence, everyone dies, everywhere. A problem like this requires multiple major powers of the world to unite in a shared agreement to stop this reckless suicide race.
We call on our leaders to declare support for such a treaty, and to begin drafting it now. We call on the whole world to unite to avert the danger.
It would not be the first time that humanity has put a stop to a technological race. Humanity has backed off from creating super-large nuclear warheads, and from creating some of the most dangerous bioweapons. We’ve done it before, and we can do it again.
How much notice will there be for the actual march?
We'll aim to give at least one month's notice once we hit 100,000 pledges, unless we cross the threshold on account of some pending emergency.
What if I don't want to commit to marching in D.C. yet?
You can also just sign up for updates! We'll let you know when our primary march reaches 100,000 pledges.
If and when we host a large march in Washington D.C., we might also wind up hosting many smaller marches in other cities around the world, depending on interest and other factors.
Link again to Pledge or Signup to be Notified, for convenience