It's standard advice to notice when a counterparty in a transaction is creating a sense of urgency, and to respond by pausing and lowering your trust in that counterparty.
It feels like a valuable observation, to me, that the counterparty could be internal: some unendorsed part of your own values, perhaps.
Alignment is treated as the hard problem and cooperation as the nice-to-have. This priority is backwards.
Alignment is a technical problem that becomes dramatically easier or harder depending on the political environment it's solved in. Racing makes alignment harder: less time, more corner-cutting, fragmented talent, competitive pressure to deploy before you're confident. Cooperation makes alignment easier: more time, shared resources, no one to race against, freedom to be as careful as the problem requires. The standard objection is that cooperation is politically intractable, but a hard political problem that determines whether alignment can succeed outranks a technical problem pursued under conditions that practically ensure failure.
There's also a structural reason cooperation is potentially more achievable here than in previous arms races. With nuclear weapons, "my country has nukes" is straightforwardly good for me, since I trust my government more than the adversary. With ASI, "my country controls ASI" raises an immediate question: *who exactly?* The President? A lab CEO? A future leader I find dangerous? At every level of every hierarchy, almost nobody is the person who would actually wield the power, and for everyone else the rational preference is that no single actor have unilateral control. Cooperation here isn't altruism; it's self-interest. And solving alignment without cooperation is actively destabilizing: it removes the shared existential risk that has been the primary brake on racing, making competition more attractive and conflict more likely.
But there's a deeper issue. Suppose alignment is solved perfectly and you can build an ASI that does exactly what its operator wants. You've now created the most dangerous weapon in history and handed it to whoever finishes first. "Aligned to whom?" becomes the entire question, and it's not a technical question at all. A perfectly aligned ASI controlled by a single actor is an existential risk to everyone else. Alignment without cooperation solves the control problem and creates the domination problem. Cooperation without alignment still prevents unilateral control and preserves the conditions under which alignment can be attempted carefully. Whether alignment's success is good news for humanity depends entirely on whether cooperation came first.
Cooperation is the hard problem; alignment is downstream.