PauseAI and E/Acc Should Switch Sides
In the debate over AI development, two movements stand as opposites: PauseAI calls for slowing down AI progress, and e/acc (effective accelerationism) calls for rapid advancement. But what if both sides are working against their own stated interests? What if the most rational strategy for each would be to adopt the other's tactics—if not their ultimate goals?

AI development speed ultimately comes down to policy decisions, which are themselves downstream of public opinion. No matter how compelling technical arguments might be on either side, widespread sentiment will determine what regulations are politically viable.

Public opinion is most powerfully mobilized against technologies following visible disasters. Consider nuclear power: despite being statistically safer than fossil fuels, its development has been stagnant for decades. Why? Not because of environmental activists, but because of Chernobyl, Three Mile Island, and Fukushima. These disasters produce visceral public reactions that statistics cannot overcome. Just as people fear flying more than driving despite the latter being far more dangerous, catastrophic events shape policy regardless of their statistical rarity.

Any e/acc advocate with a time horizon extending beyond the next fiscal quarter should recognize that the most robust path to sustained, long-term AI acceleration requires implementing reasonable safety measures immediately. By temporarily accepting measured caution now, accelerationists could prevent a post-catastrophe scenario where public fear triggers an open-ended, comprehensive slowdown that might last decades. Rushing headlong into development without guardrails virtually guarantees the major "warning shot" that would permanently turn public sentiment against rapid AI advancement, in the way that accidents like Chernobyl turned public sentiment against nuclear power.

Meanwhile, the biggest dangers from superintelligent AI—proxy gaming, deception, and recursive self-improvement
In Emergence of Simulators and Agents, my AISC collaborators and I suggested that whether consequentialist or simulator-like cognition (which one could describe as a subcategory of process-based reasoning) emerges depends critically on environmental and training conditions, particularly the "feedback gap": the delay, uncertainty, or inference depth between action and feedback. Large feedback gaps select for instrumental reasoning and power-seeking; small feedback gaps select for imitation and compression. For example, LLMs are trained primarily via self-supervised learning (SSL), with a minimal feedback gap, and display predominantly simulator-like behavior, whereas RL-trained AlphaZero is clearly agentic.
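To make the feedback-gap contrast concrete, here is a minimal toy sketch in Python (my own illustration, not code from the paper): a next-token learner gets a training signal after every single prediction, while an episodic learner receives one scalar reward only at the end of a 20-step episode and has to infer which of its actions mattered.

```python
import random

random.seed(0)

# Small feedback gap (SSL-style): every prediction is scored immediately.
corpus = "abab" * 25
counts = {}  # bigram counts stand in for a tiny "language model"
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1  # feedback arrives per token, right away

def predict_next(prev):
    options = counts.get(prev, {})
    return max(options, key=options.get) if options else random.choice("ab")

print("SSL-style prediction after 'a':", predict_next("a"))  # -> 'b'

# Large feedback gap (episodic RL-style): one scalar reward per 20-step
# episode, so the learner must infer which of its 20 actions mattered.
prob_one = 0.5  # single adjustable policy parameter: P(action = 1)
for episode in range(500):
    actions = [1 if random.random() < prob_one else 0 for _ in range(20)]
    reward = 1.0 if sum(actions) > 10 else 0.0  # delayed, episode-level signal
    # crude credit assignment: nudge the policy toward (or away from)
    # whatever mix of actions this episode happened to contain
    prob_one += 0.1 * (reward - 0.5) * (sum(actions) / 20 - prob_one)
    prob_one = min(max(prob_one, 0.01), 0.99)

print("RL-style policy after 500 episodes, P(action=1):", round(prob_one, 2))
```

The point of the toy is structural, not quantitative: the bigram learner is corrected on every token, while the episodic learner must propagate a single delayed signal back across twenty decisions, which is the kind of gap that rewards instrumental reasoning rather than imitation.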
The dynamic you describe, in which patterns that steer toward states where they have more steering capacity outcompete other patterns, is real, but may be context...