There are several things one can do when having beliefs outside the Overton window:
I agree that 3 tends to backfire. But it is usually unclear what to do instead.
Not sure if it was the LLM-ification or the original bulletpoints, but I think this essay was a bit confusing and rambly to read, and it also smelled of LLM by the end (I thought this before I saw the note). So, just a datapoint that whatever you are doing isn't working well, IMO. No offense intended.
On the object level I think I basically agree with most of what you are saying.
I'm somewhat confused by what people mean by "strategic" in these discussions. It seems to me like there are aspects of communication that I would call "strategic" but that are uncontroversial. I would point to the "grown not crafted" idea from IABIED. I think this is an extremely succinct and effective way to communicate the underlying technical idea, but of course expressing the idea in more technical language would also be truthful (arguably even more so). The difference isn't that the "grown not crafted" way of communicating the idea is more truthful, but (I would infer) that it is predicted to be more helpful in allowing others to understand the idea, even if they aren't already familiar with the relevant technical knowledge. In other words, it is a more strategic way of communicating the idea.
I don't think that example is particularly controversial, but I think these discussions are meant to also apply to statements that are highly contentious and subject to adversarial dynamics. When we consider statements in those domains, I think we enter a kind of "messy middle" where the truthfulness of a statement is much more contested and it isn't necessarily as easy to separate the core content from the communication strategy.
Let's take the "DEFUND THE POLICE" example from the post. I definitely believe that some people engage in the "window-stretching" as you describe. But I could also imagine proponents of the slogan saying something like this:
"I genuinely believe that current policing practices are so unjust that the police deserve to be defunded. It would be better for policing to be reformed to correct these injustices while still carrying out legitimate policing functions. But saying I want to "defund the police" is genuinely true under the current system. If I only advocated for more centrist reforms that would actually be misleading about what I believe and trying to fit my beliefs into the overton window, not the other way around. Its extremely important to use slogans like "DEFUND THE POLICE" when you actually believe in them because the powerful simply ignore centrist calls for reform. They need to know how strongly we actually feel about this. Calling for measures such as defunding is the only way to have our voices heard".
I think such a view lives in this messy middle. People who agree are likely to see it as genuine, while critics will be tempted to claim inappropriate use of strategy. The issue of whether the person who holds this view is being "truthful" essentially collapses back into the underlying object-level issues. Those who agree on the object level will also agree on the meta-question of whether what the person is doing is acceptable, and the reverse for someone who disagrees.
As I allude to here, I think the best approach is to simply argue over the object-level question, and leave aside the discussions about whether someone is being strategic, how truth-seeking they are, whether they are "gaslighting", etc.
I'm somewhat confused by what people mean by "strategic" in these discussions.
Apologies, I tried to make this clear: I am referring to "high-dimensional discourse chess" that requires asserting or assuming "we can model how public acceptability shifts and cleverly intervene to steer those shifts." That's not about communicating an idea; it's about convincing people of something in order to have them react in ways that change the public acceptability of another thing.
Let's take the "DEFUND THE POLICE" example from the post.
Of course, not everyone who said "Defund the police" was playing games - some were true extremists, including leftist anarchists. But far more were being duped by those trying to play those games, advocating for what they thought was good based on social proof, almost always without a coherent replacement in their mind. That is, they generally weren't advocating anarchy, they were advocating replacing police with something else undefined. It seems you are saying something similar; you dislike the current system, and say it should be replaced instead of reformed, but don't have a clear argument for the details.
I think the best approach is to... leave aside the discussions about whether someone is being strategic, how truth-seeking they are, whether they are "gaslighting", etc.
I agree - and if anything, think it's exactly in line with the post's conclusion? Specifically, in the post, I don't argue that readers should dismiss anyone else's views for playing discourse games, I argue that they should not attempt them.
The only reason I feel comfortable using the Defund the Police example is because the leaders of the movement were explicit in their intent to widen the window. For example, "Progressive Congresswoman Alexandria Ocasio-Cortez called the use of language like “defund” an “excellent choice” for those who have been trying for years to “prompt a national conversation” about police. “‘Refund’ or ‘reallocate’ didn’t do that,” she tweeted. The shocking language of the slogans can help shift the Overton Window, making what might otherwise be politically controversial interventions more palatable."
And I wasn't guessing about EA either; I have been in the room, repeatedly, when senior people in EA talked about shifting the Overton window on AI risk. So yes, don't accuse others of doing this, but that doesn't mean you can't call them out when they say they are doing it!
Apologies, I tried to make this clear: I am referring to "high-dimensional discourse chess" that requires asserting or assuming "we can model how public acceptability shifts and cleverly intervene to steer those shifts." That's not about communicating an idea; it's about convincing people of something in order to have them react in ways that change the public acceptability of another thing.
No need to apologize; the point of having discussions is to hash these things out, right? And any error or misunderstanding may also be mine rather than yours.
The thing I find confusing is that I feel like I get the vibe that is meant, but I don't understand what the bright line or criteria are for when something is acceptable versus when it violates some norm. "High-dimensional discourse chess" to me kind of leans into this rather than clarifying the issue. I get the sense that people mean to address discursive strategies that are in some way misleading, bad faith, "not truth-seeking", or similar "vibes". But in my view, whether these aspects apply to a person's statements or a group's rhetoric is likely to be contested, and the disagreement about those meta-questions is likely to have a high degree of overlap with the underlying object-level disagreement. I am worried that this causes a dynamic where people who disagree on the underlying issue often get into meta-debates about who is doing bad discourse stuff, and that this is often unproductive.
It seems you are saying something similar; you dislike the current system, and say it should be replaced instead of reformed, but don't have a clear argument for the details.
I strongly disagree with proponents of the "defund the police" slogan. Literally nothing I said has anything to do with my views on policing. I brought it up purely to address it in the context of your OP, as an example of the phenomenon under discussion. My personal interest in this has nothing to do with the defund the police example itself, and I think this goes to my concern about how meta-discussions often tend to meander to unproductive topics.
That said, I'm curious what you think supporters of the slogan would feel about your argument. My suspicion is that they wouldn't think the issue is as clear-cut as you describe. I will avoid trying to describe what arguments they might make, since I don't want to violate my own advice and get into the weeds on a topic that I view as a sideshow, but hopefully you can see my point here. When people have strong disagreements, accusations of meta-level bad behavior that seem obvious to a person on one side of the underlying issue may not seem as obvious to someone on the other side.
And I wasn't guessing about EA either; I have been in the room, repeatedly, when senior people in EA talked about shifting the Overton window on AI risk. So yes, don't accuse others of doing this, but that doesn't mean you can't call them out when they say they are doing it!
Part of my interest in these discussions is that I have zero interactions with anyone in the EA/Rationalist spheres other than reading stuff online and, in more recent years, doing my own posting. I don't have any inside knowledge, but I find statements like this very interesting and informative because I would like to understand what is going on to the extent I can. I find this difficult, however, because when the statements being evaluated are public, I often feel like people are overreading them. My read of many public statements that attract allegations of inappropriate discourse game-playing is that the allegations often appear to be downstream of disagreements on the object-level issues.
So yes, don't accuse others of doing this, but that doesn't mean you can't call them out when they say they are doing it!
I'm not really sure how to take your statement as not being an accusation that EAs are doing the thing you are criticizing. Are you saying that prominent EAs would read what you have said here and publicly say "yes, we are doing this"? If not, I don't think it really makes sense to say you are just calling out what they say they are doing. I understand you have non-public information here, and of course you would use all the information you have access to in forming your own views. The thing that is difficult for me, as someone working off of only public info, is how to come to my own understanding of what is going on. If these people aren't openly saying what they are doing, I can't rely on "but they are saying it!" as something that shows the behavior in question is over the line.
Edit: I appear to be rate-limited, so I will add my clarifications here (hopefully that isn't breaking any rules; apologies if so), and then comment once the limit is up (this and all below was added after the subsequent comment).
I don't think there is a bright line; there's just a point being made about a gradient where discourse chess is on one side, and talking about object-level facts is on the other. And I pointed out that on the chess side, people suck at getting what they want.
Perhaps I misunderstood you then. My perspective is something like this:
Speed of a car on a highway is a gradient, but if you are caught speeding, for practical purposes we might break that gradient down into buckets. If you are going 61 mph in a 60 zone, I have zero problem. If you are going 200 mph in a 60 zone, I have zero problem with you doing time in prison. Somewhere in there is a "messy middle" where people might reasonably disagree, but the disagreement is within acceptable parameters. Maybe I try to stay within 10 mph of the speed limit but for you it's 15. We disagree, but it's not a prison vs. no-prison magnitude of disagreement. Even if I think you should stay below 70, I don't think 71 deserves jail time.
The impression I got from your post, which may be mistaken, is that you think people who behave in a way that is too strategic are doing something bad or wrong. It seemed to me like you thought that "crossing a line" in this regard is, or should be, a norm violation. I'm of the view that while in any given case what a person does along these lines might not be ideal, it shouldn't be considered a norm violation. I think it is often very messy to distinguish strategic/gameplaying/intended-Overton-shifting speech from "just saying what you think". You could write something that to you feels like just putting your best understanding of the truth out there, but which someone else feels is too strategic. I think when people speak, they should avoid the "norm-violating" versions of strategic speech but shouldn't focus too much on whether or not their speech is strategic.
So I agree that imputing this type of behavior, as an accusation, is worrying, but that's different than pointing out when the behavior was in fact intentional.
For clarity, when I say it seems like you're making an accusation, I'm not saying the accusation is wrong. If a person says "I saw so-and-so commit this crime", that is accusing the person of a crime. It's not bad to correctly accuse people of things! My issue is that I feel like it's often unclear what the accusation is, or (as in this case, I think?) whether there even is an accusation. To me it seemed like you were saying that some EAs did an intentionally bad thing, and it was bad of them to do it; I'm trying to understand whether that is correct, basically (and if you are saying they did something bad, what that thing is).
The thing I find confusing is that I feel like I get the vibe that is meant, but I don't understand what the bright line or criteria are for when something is acceptable versus when it violates some norm.
I don't think there is a bright line; there's just a point being made about a gradient where discourse chess is on one side, and talking about object-level facts is on the other. And I pointed out that on the chess side, people suck at getting what they want.
Similarly for the other points, I don't really care about exact lines, I'm not being prescriptive.
I'm not really sure how to take your statement as not being an accusation that EAs are doing the thing you are criticizing.
You should read the last paragraph of my response. I don't need to ask what prominent EAs would think, since I know what they did, in fact, say. And you may not have been around, but if you look at older posts on the EA forum, this wasn't exactly a secret; it was a declared intention. And looking back, Rob was complaining about this 5 years ago, and here he's complaining about related issues 7 years ago. So I agree that imputing this type of behavior, as an accusation, is worrying, but that's different than pointing out when the behavior was in fact intentional.
I am not sure that critics of defunding the police claimed an inappropriate use of strategy. One of them implied that defunding the police is a luxury belief, i.e., a phrase which is both used as a status symbol and ensures that attempts to implement it differentially hurt poor people, not the rich ones who tried to spread the meme.
I am stating, not implying, that the galaxy brain plan for stretching the Overton window backfired on them. But I'm not criticizing the position, I'm criticizing the efficacy of the strategy. Whether it was a luxury belief seems like a mostly separate point, though correlated in that such clever strategies do seem to be more likely to be pursued by the educated and wealthy.
I only addressed the defund-the-police issue because it is used as an example in the OP; I'm not trying to describe critics of the slogan in general. "Inappropriate use of strategy" is my way of describing the view set forth in the OP, because I think it relates to people "speaking strategically" vs. "just saying what they think".
The standard shape of conspiracy theories (and truths, sometimes) is that YOU (or more generally your tribe or the good guys) cannot manipulate this very well at all, but THEY do it all the time.
Like so many things, it doesn't generalize very well. So much depends on which audiences you're trying to sway, how far the idea-distance is, what adversarial (or just alternate-focused) sources of movement/discussion are happening, what extraneous-to-you-but-critical-to-them other topics are taking their attention, etc.
That's a really good point.
I think it's a combination of selection bias, with conspiracy narratives surfacing only the most plausible or most successful examples, and accurate perception of how poorly the examples you've seen personally work out.
Not commenting on the whole conflict here. But I remember once being told by an EA member that I'd better not go to some event with policymakers in the UK because my views sound a little too crazy.
I don't think that advice is obviously wrong - the costs of being outside are real, and so there are definitely times when staying inside the Overton window is important. Not alienating policymakers seems like a good place to simply not talk about certain things, at least sometimes. (Cf. Eliezer's Time article being laughed at during the White House press briefing. That worked out fine; Eliezer wasn't trying to stay in the good graces of policymakers. Someone from CSET writing the same would have been a mistake.)
So it's further evidence that EAs are sensitive to the questions, but not an example of what I think Rob was criticizing, and as noted, not something I think was the wrong call.
Sometimes, people don't say what they actually think, not because saying it would be rude or costly, but because they believe saying it now would be counterproductive. They see that the true claim is outside the Overton window. And they conclude that the strategic play is to say something weaker, something adjacent. That will let you normalize the frame without triggering the immune response. You will redesign the house a bit now so that you can slide the window later. Then, when the ground has shifted, you imagine, the real claim becomes sayable.
Strategic discourse chess?
The above is an attempt at high-dimensional discourse chess. In politics and the world of ideas, it seems that people play it constantly. But building on a recent comment by Rob Bensinger, I want to argue that the conceit behind playing, that we can model how public acceptability shifts and cleverly intervene to steer those shifts, is usually wrong. Not wrong in the sense that discourse has no structure, or that framing never matters, but in the sense that the attempt fails.
Most people vastly overestimate their ability to predict second- and third-order effects of anything, including strategic speech. And this is a more damaging error than you might expect. The Overton window is real enough as a rough description, but you won't get to redesign the game board by yourself. And if you try to use the window to navigate, it becomes completely opaque.
Despite that, people routinely substitute strategic positioning for plain statement, the simulacra level shifts upward, and arguments get made for their imagined downstream effects rather than on their merits. Movements distort their own public positions and then lose track of the distortion. The hedged version becomes the one newcomers learn, and the original assessment survives only in private conversations. When talking to others, members then need to "peel back layers upon layers of bullshit priors to even begin to rebuild the correct foundational assumptions on which anything you want to discuss must be rebuilt."
Yes, Overton windows exist, but...
Any society has zones of easy speech, costly speech, and nearly unspeakable speech, and those zones move. Repetition changes salience. Institutions confer or withdraw legitimacy. Crises make previously marginal ideas suddenly concrete. None of this is controversial, and none of it is what I am arguing against.
The error begins when a rough descriptive metaphor gets promoted to a causal model, and that causal model licenses departure from simple communication. "Shift the window" doesn't work when there are dozens of windows being used by different people, and you don't know which of them can be moved, by whom.
Saying that discourse has shifting boundaries is a true claim, one that helps you and others understand costs and make decisions. But moving from there to saying that you, or some other given person, can reliably forecast how their speech acts will move those boundaries (through chains of intermediaries, coalition responses, media distortion, and counter-mobilization) is a very different claim. The first is an important social observation. The second is a prediction about a complex adaptive system, and it should be held to the standards we normally apply to such predictions. And even if we had no moral compunction about lying, perhaps by omission, perhaps by shading the truth and making weaker statements than those we believe, we should still not do so if the prediction about our capability to manipulate others is incorrect.
...can they be reliably manipulated?
So we can lay aside the moral argument, though one wishes it were sufficient, to ask whether the predicted ability to manipulate social reality is correct. Consider what you would actually need to know to execute a successful higher-order discourse strategy. Not just "what happens when I say X," but "what happens because others react to my saying X, and because still others react to those reactions, and because institutions update on the pattern."
You would need to know which audiences matter, which intermediaries will amplify or reframe your statement, how opposing coalitions will interpret the move — not just what they will think of the claim, but what they will infer about you, your coalition, and the trajectory of the dispute. You would need to know whether the framing you introduce will remain yours or get captured and repurposed by opponents. It is easy to think your picture is the same as the window, but it's hard to know when you can't see through the version in your head. In practice, it seems like nobody knows these things at the resolution that confident strategy requires - even though thinking otherwise is, as Magritte kind of said, the human condition.
Worse, the causal pathways between a speech act and its downstream effects are partly hidden, partly unstable, and partly shaped by the behavior of people who are themselves trying to game the same system. The painting actually changes the landscape behind it. Feedback loops run through media, institutions, and coalition dynamics that are individually hard to model and collectively beyond the reach of the precision that "I will shift the window incrementally over three years" demands. Markets exist and price movements are real, but most people cannot profitably trade on macro narratives. The Overton window is the same kind of thing — it points at something real without giving you a dashboard.
Why would you think this could work?
A large part of the overconfidence comes from narrative availability, that is to say, post-hoc selection bias. Discourse shifts are easy to explain after the fact, even when they are very hard to forecast before. Once gay marriage reached majority support, or once the Iraq War became broadly unpopular, you could tell a clean retrospective story about how acceptability moved. The framing shifted here; the key event was there; the tipping point was this. But for every retrospective narrative that sounds compelling, there are dozens of alternative pathways that would have sounded equally plausible in advance and did not materialize. Nobody writes the postmortem on the strategic frame that vanished without effect.
Smart, politically engaged people are especially vulnerable here. They are immersed in discourse, they track symbolic moves constantly, and they see lots of local reactions in their own milieu that they mistake for system-level visibility. A policy intellectual watches their essay circulate within their corner of Washington and concludes they understand how public opinion mechanics work. But the visible reactions within a narrow professional circle are a wildly unrepresentative sample of how a broader, messier, more inattentive public will respond.
The case of AI Safety
Getting back to the conversation that spawned the very long essay, the effective altruism movement's long strategic deliberation around AI risk messaging is a case in point. For years, many people in the community believed that advanced AI posed serious and/or existential risks, but worried that saying so plainly would be alarmist, and place the concern outside the window of respectable policy discussion. The public vocabulary was carefully modulated: emphasize near-term harms, speak in technical terms about "alignment," build credibility with the ML establishment before making stronger claims, avoid the giggle factor. The strategic logic was explicit and constantly discussed within the community.
As Rob Bensinger recently said, directly inspiring my analysis, "EAs' attempts to play eleven-dimensional chess with the Overton window are plausibly worse than how scientists, the general public, and policymakers normally react to any technology under the sun that sounds remotely scary or concerning or creepy." I agree, but also want to point out that Rob's statement is also the kind of discourse retrodiction that I'm condemning.
To explain, I'll first try to make the story clear. The LessWrong rationalists, led by Yudkowsky, started thinking and worrying about AI risks. Mountains of digital ink were spilled on the technical concerns and the reasons to expect the risk to be existential. Bostrom took up the mantle, while sitting in literally the same office as MacAskill and CEA. But rationalist groups were trying to be careful about noticing the skulls, while, as Rob said, EAs were more politically savvy and didn't want to talk too loudly about the fanaticism; it was recognized more quietly in academic papers, but most of the movement tried to downplay any direct claims about extinction, and talked about global catastrophes instead, while meaning existential risk. (I am certainly guilty of this, e.g., conflating global catastrophe and extinction.)
But while the EAs were too cleverly avoiding saying that if anyone builds ASI, everyone will die, the public became intensely interested in AI essentially overnight. Prominent figures outside the EA community started talking about extinction risk without any of the careful stage-setting that was supposedly necessary. The discourse moved because of an exogenous technological shock, not because of the framing strategy. And when the moment of public attention arrived, the community's public positioning was evidently more hedged and less clear than its private beliefs. The years of strategic patience had not moved the window; they had moved the movement's own voice away from what its members actually thought, leaving them less prepared to make the direct case when it suddenly mattered.
I don't want to overstate this, in two ways. First, most of the credibility-building during those years seems to have helped. There may even be cases where the patient framing work around what to say laid groundwork that paid off in ways I can't trace. But the broad shape of what Rob outlined, that is, years of strategic hedging, an exogenous shock that moved the debate on its own terms, a community caught flat-footed by its own caution, is suggestive, even if any individual judgment call during that period might have been defensible at the time.
But second, this overstates the EA community's confidence in the existence of existential threats from AI. There were, in fact, and still are, very clear splits between the most and least worried. Unsurprisingly, these were unclear both externally and internally. There was supposedly consensus about EA priorities, even when there shouldn't have been, because actual moral views differed. But as I said there, "cause prioritization is communal, and therefore broken", and as I said afterwards, the community was illegible and confused; they needed to clarify views and fight back against the false consensus.
Pushing back is also manipulation.
So the false consensus effects are a real danger, and one that I think came back to bite the community. But when Scott Alexander says "Hey, I partly disagree with the way this is being communicated, and I'd like to give other people social permission to disagree too," that is partly pushing back against consensus narratives in the way I think was needed, but it is also explicitly pushing for a second-order effect of expanding the Overton window.
As should be obvious, I think that's both good and bad. The correct point is that truth doesn't always win, and that communication is hard. (See: Wiio's laws.) Scott was exactly correct to say that we need to point out when we disagree. But in a meta-conversational discussion about what to say and what not to say in order to have some predictable effect on what others will and won't say, any given views are usually not even wrong. The part where Scott says that he disagrees seems great; the part where he does so to change the discourse seems bad. (But he agrees that he's wrong: "I have the idiotic personality flaw that I believe if I just explain myself well enough, everyone will agree that I am being fair and that everything was a misunderstanding. I agree this is stupid...")
Even first-order effects of speech are hard to predict. You say a thing; different audiences hear different things; media ecosystems select and distort; opponents choose whatever interpretation serves them. Even at this level, confident forecasts are regularly wrong.
Second-order effects are worse by a combinatorial factor. Now you are predicting not just direct reactions but reactions to reactions: allies updating their models of you, enemies mobilizing, neutrals inferring coalition identity, institutions reclassifying what kind of actor you are, opportunists hijacking whatever frame seems newly available. Each response feeds back into the others, and each actor is themselves strategizing, which means the system is reflexive — your attempt to game it changes it.
By the time someone goes past what Scott did, and reaches the third order version of "I don't actually endorse this claim, but expressing it now will make a related claim easier to advance in two years, because the discourse will have shifted in the following way," they are writing speculative political discourse fiction. The number of intervening variables is too large and the environment too sensitive to outside shocks for this kind of planning to deserve the word "strategy."
Again, this error is understandable, because a selection effect reinforces the idea that it can work. The rare cases where multi-step discourse strategy appears to have worked become famous teaching examples, the ones people cite when defending the practice. Of course, the far more common cases where it failed are never labeled as strategic failures. They vanish into the mass of political speech that went nowhere. People learn from a highlight reel and conclude the game is winnable. You want examples? Look at decades of animal rights advocacy trying to play the game of pushing meat-eating outside the Overton window, using tactics ranging from paint-throwing to billboards to violence.
But there's another mistake that happens, because there is also a simpler and less flattering explanation for the prevalence of strategic overconfidence generally. It is gratifying to see yourself as a subtle navigator of opinion dynamics, and less gratifying to admit that you are mostly guessing. "This would be counterproductive" is often the most prestigious available way to avoid saying something costly. I do not think every instance of strategic reticence is rationalized cowardice. But the opacity of the system makes it very hard to tell when it is and when it isn't, and the people doing it are in the worst position to judge.
Another real-world example: Defund the Police
What does this look like when the strategic logic gets tested against a real adversarial environment?
"Defund the Police" in 2020 was an explicit, self-conscious exercise in Overton window strategy. After the murder of George Floyd, activists adopted the slogan on a specific theory: by staking out a maximalist position, you shift the window so that more moderate reforms — reallocating some police funding to social services, civilian oversight, community investment — seem centrist by comparison. This is textbook window-stretching. The logic sounds clean in the abstract.
What actually happened was that opponents, not allies, got to decide what the slogan meant in public. Republican strategists pinned "Defund the Police" to every Democrat on every ballot. Moderate Democrats spent the next two years trying to create distance from a position most of them had never held. The reforms that were supposed to look reasonable by comparison instead got tarred by association with the maximalist frame. Polling consistently showed the slogan was unpopular even among Black voters who strongly supported the underlying policy goals. The framing had become a barrier to the very reforms it was meant to enable.
The pattern is worth isolating, because it recurs. In an adversarial environment, you do not get to introduce a frame and then control how it propagates. Your opponents select the interpretation that serves them, media amplifies the version that generates engagement, and coalition dynamics pull the meaning away from your intention. The frame goes feral. You can see this in smaller episodes too, where framing devices meant to later support one view get captured and repurposed, and careful attempts at normalization instead trigger pre-emptive opposition. The strategist's error is often simply that they are modeling the discourse as though their move is the last move, when in reality every other actor is also playing.
The other common failure is quieter. Strategic silence curdles into self-censorship. People tell themselves they are waiting for a better moment, and the better moment never arrives because the calculation is unfalsifiable. It is always possible to say the time is not yet right. The gap between private views and public statements widens, and nobody can quite explain when the honest version was supposed to come out. From halfway inside, this is what much of the EA community's AI messaging looked like for years. And it is common enough in other movements that it should be treated as a default outcome rather than a surprising one.
Strategic discourse chess usually underperforms just saying what is true.
What, then, should you actually do[1]?
A direct argument, where you say what you think and explain why, has a property that strategic indirection lacks: others can engage it. Evidence can bear on it. Disagreement surfaces clearly rather than festering as mutual suspicion about what everyone really believes. You are not relying on a hidden causal chain between your speech act and some future state of public opinion. You are making a claim and seeing whether it holds.
This is not always rewarded. Truthful speech has no magical property that makes it persuasive, and plenty of true things have been said clearly and ignored for decades. I am genuinely uncertain about how far this norm extends — in legislative negotiation, in diplomacy, in actual political campaigns with professional strategists and tight feedback loops, the calculus may be different. But in the contexts where most people actually face this choice — writing, public argument, movement-internal discussion, intellectual life — directness has a practical advantage: you get usable feedback. You find out which objections recur, which parts of your view are wrong, who actually agrees versus who was nodding along out of coalition loyalty. If you never say what you mean, you never learn whether it is true.
And importantly, being honest doesn't imply being mean! As Scott Alexander suggested, Be Nice, At Least Until You Can Coordinate Meanness. I would emphasize the "at least." It's often better to just be nice[2] and speak the truth. And this is even more critical in complex environments, where coalitions built around conflationary alliances fracture when the euphemisms get decoded, which they always eventually do. Coalitions built around stated disagreement about real claims at least know what they are agreeing and disagreeing about. If you want to work with the copyright absolutists and the artists' unions and the taxi unions to regulate AI use and misuse, you should all know that you have different motives, so that you don't need to lie, or be too-cleverly strategic, either with your allies or with your opponents.
The obvious conclusion
The Overton window exists. Acceptability shifts. Framing matters. None of this entitles you to the further claim that you understand how the game works well enough to play it at range. It certainly doesn't license you to censure others for how they speak.
My concluding advice didn't need multiple pages of stories and analysis. If you think something is true, usually say it. If you think it is false, usually do not say it. If your primary reason for departing from this is an elaborate theory about how public opinion dynamics will unfold over the next several years, you and others should be far more suspicious of yourself than is commonly the case[3].
But notice how the opacity of the system makes it easy to rationalize fear as prudence. When the strategic situation is genuinely unreadable, any level of caution can be dressed up as sophisticated restraint, and you can never be proven wrong because you never ran the counterfactual.
Most people who decline to say what they think for strategic reasons are not executing a plan. They are telling themselves a story about a plan. It's a good-looking plan because it's unfalsifiable; the relevant causal structure is unreadable. It's also self-serving, because it rebrands risk-aversion as sophistication.
Again, trying to launder weak truth claims through supposed strategic social effects is usually worse than stating the object-level view. You do not, in fact, know how the discourse game cashes out. The elaborate confidence is unjustified. The Overton window is real enough to constrain you but not readable enough to play like chess. If you cannot see around corners, stop pretending your silence is statesmanship, and don't lie, just tell people you aren't going to talk about it.
Note: After outlining and drafting some parts, this essay was fleshed out by an LLM (in the style of Rob Bensinger or Oliver Habryka, depending on the section), then carefully reviewed and edited heavily. Images were suggested or generated by LLMs.
Other than reading section titles before starting the section, so you know what they will say.
This should be obvious, but saying the true thing clearly is not the same as saying it with maximum abrasiveness to prove you don't care about social consequences. That is either its own kind of strategic posturing, subject to the same critique, or it's being a jerk, which isn't an excuse. The norm here is supposed to be honesty, not provocation.
All of that said, strategic sequencing does sometimes work. Gradualism has real success cases. Legal campaigns sequence arguments deliberately. Some claims genuinely need preconditions before they can land — shared vocabulary, institutional trust, background concepts that make the claim parseable.
None of this rescues the more complex general strategy for public conversations. The cases where strategic communication succeeds tend to share specific features: well-defined audiences, short causal chains, institutional backing, and tight feedback loops that let you correct course. Freelance discourse strategy across a diffuse, adversarial, multi-audience media environment has almost none of these. The success cases are precisely the ones that least resemble the normal situation.