Some key events described in the Atlantic article:
Kirchner, who’d moved to San Francisco from Seattle and co-founded Stop AI there last year, publicly expressed his own commitment to nonviolence many times, and friends and allies say they believed him. Yet they also say he could be hotheaded and dogmatic, that he seemed to be suffering under the strain of his belief that the creation of smarter-than-human AI was imminent and that it would almost certainly lead to the end of all human life. He often talked about the possibility that AI could kill his sister, and he seemed to be motivated by this fear.
“I did perceive an intensity,” Sorgen said. She sometimes talked with Kirchner about toning it down and taking a breath, for the good of Stop AI, which would need mass support. But she was empathetic, having had her own experience with protesting against nuclear proliferation as a young woman and sinking into a deep depression when she was met with indifference. “It’s very stressful to contemplate the end of our species—to realize that that is quite likely. That can be difficult emotionally.”
Whatever the exact reason or the precise triggering event, Kirchner appears to have recently lost faith in the strategy of nonviolence, at least briefly. This alleged moment of crisis led to his expulsion from Stop AI, to a series of 911 calls placed by his compatriots, and, apparently, to his disappearance. His friends say they have been looking for him every day, but nearly two weeks have gone by with no sign of him.
Although Kirchner’s true intentions are impossible to know at this point, and his story remains hazy, the rough outline has been enough to inspire worried conversation about the AI-safety movement as a whole. Experts disagree about the existential risk of AI, and some people think the idea of superintelligent AI destroying all human life is barely more than a fantasy, whereas to others it is practically inevitable. “He had the weight of the world on his shoulders,” Wynd Kaufmyn, one of Stop AI’s core organizers, told me of Kirchner. What might you do if you truly felt that way?
“I am no longer part of Stop AI,” Kirchner posted to X just before 4 a.m. Pacific time on Friday, November 21. Later that day, OpenAI put its San Francisco offices on lockdown, as reported by Wired, telling employees that it had received information indicating that Kirchner had “expressed interest in causing physical harm to OpenAI employees.”
The problem started the previous Sunday, according to both Kaufmyn and Matthew Hall, Stop AI’s recently elected leader, who goes by Yakko. At a planning meeting, Kirchner got into a disagreement with the others about the wording of some messaging for an upcoming demonstration—he was so upset, Kaufmyn and Hall told me, that the meeting totally devolved and Kirchner left, saying that he would proceed with his idea on his own. Later that evening, he allegedly confronted Yakko and demanded access to Stop AI funds. “I was concerned, given his demeanor, what he might use that money on,” Yakko told me. When he refused to give Kirchner the money, he said, Kirchner punched him several times in the head. Kaufmyn was not present during the alleged assault, but she went to the hospital with Yakko, who was examined for a concussion, according to both of them. (Yakko also shared his emergency-room-discharge form with me. I was unable to reach Kirchner for comment.)
On Monday morning, according to Yakko, Kirchner was apologetic but seemed conflicted. He expressed that he was exasperated by how slowly the movement was going and that he didn’t think nonviolence was working. “I believe his exact words were: ‘The nonviolence ship has sailed for me,’” Yakko said. Yakko and Kaufmyn told me that Stop AI members called the SFPD at this point to express some concern about what Kirchner might do but that nothing came of the call.
After that, for a few days, Stop AI dealt with the issue privately. Kirchner could no longer be part of Stop AI because of the alleged violent confrontation, but the situation appeared manageable. Members of the group became newly concerned when Kirchner didn’t show up for a scheduled court hearing related to his February arrest for blocking doors at an OpenAI office. They went to Kirchner’s apartment in West Oakland and found it unlocked and empty, at which point they felt obligated to notify the police again and to alert various AI companies that they didn’t know where Kirchner was and that there was some possibility he could be dangerous.
Both Kaufmyn and Sorgen suspect that Kirchner is likely camping somewhere—he took his bicycle with him but left behind other belongings, including his laptop and phone.
...
The reaction from the broader AI-safety movement was fast and consistent. Many disavowed violence. One group, PauseAI, a much larger AI-safety activist group than Stop AI, specifically disavowed Kirchner. PauseAI is notably staid—it includes property damage in its definition of violence, for instance, and doesn’t allow volunteers to do anything illegal or disruptive, such as chain themselves to doors, barricade gates, or otherwise trespass at or interfere with the operations of AI companies. “The kind of protests we do are people standing at the same place and maybe speaking a message,” the group’s CEO, Maxime Fournes, told me, “but not preventing people from going to work or blocking the streets.”
This is one of the reasons that Stop AI was founded in the first place. Kirchner and others, who’d met in the PauseAI Discord server, thought that genteel approach was insufficient. Instead, Stop AI situated itself in a tradition of more confrontational protest, consulting Gene Sharp’s 1973 classic, The Methods of Nonviolent Action, which includes such tactics as sit-ins, “nonviolent obstruction,” and “seeking imprisonment.”
...
Yakko, who joined Stop AI earlier this year, was elected the group’s new leader on October 28. One cause of the falling-out, he told me, was that he and others in Stop AI were not completely on board with the gloomy messaging that Kirchner favored: “I think that made him feel betrayed and scared.”
Going forward, Yakko said, Stop AI will be focused on a more hopeful message and will try to emphasize that an alternate future is still possible “rather than just trying to scare people, even if the truth is scary.” One of his ideas is to help organize a global general strike (and to do so before AI takes a large enough share of human jobs that it’s too late for withholding labor to have any impact).
Read the rest of the article here. You can find my own strong takes at the bottom.
Overall, I feel responsible for not having picked up on the possibility that Sam could act out to this extent. There were frictions in coordination, and issues on which both I, as an advisor, and the organisers on the ground were pushing back, but I had not expected this.