I occasionally see the "unilateralist's curse" invoked as a rationale for censorship in contexts where I am very suspicious that the actual reason is protecting some interest group's power. But if I'm alone in such suspicions, then maybe that means I'm just uniquely paranoid. To help sort out what's what, I consulted the paper by Nick Bostrom, Anders Sandberg, and Tom Douglas in which the term was coined.
The main argument (which the authors note is analogous to the "winner's curse" in auction theory) is basically an application of regression to the mean: if N agents are each deciding whether to do something on the basis of its true value V plus an independent random error term E, then the agent who happens to draw a large positive E may end up doing the thing even when V is actually negative, and because the expected maximum of N error draws grows with N, the problem gets worse for larger groups.
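This can be made concrete with a quick Monte Carlo sketch. (This is my own toy model, not the paper's formal setup: each agent observes V plus Gaussian noise and acts iff their estimate is positive; the function name and parameters are illustrative.)

```python
import random

def p_someone_acts(true_value, noise_sd, n_agents, trials=100_000):
    """Estimate the probability that at least one of n_agents acts.

    Toy model: each agent independently observes true_value plus
    Gaussian noise with standard deviation noise_sd, and acts iff
    their noisy estimate is positive.
    """
    count = 0
    for _ in range(trials):
        # "At least one acts" is what the curse is about: the agent
        # with the largest error draw is the one who pulls the trigger.
        if any(random.gauss(true_value, noise_sd) > 0
               for _ in range(n_agents)):
            count += 1
    return count / trials
```

Running this with a genuinely harmful initiative (say, `true_value = -1.0`, `noise_sd = 1.0`) shows the probability of someone acting climbing from roughly 0.16 with one agent toward near-certainty as the group grows, which is the curse in miniature.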
Crucially, Bostrom et al. note:
[T]hough we have thus far focused on cases where a number of agents can undertake an initiative and it matters only whether at least one of them does so, a similar problem arises when any one of a group of agents can spoil an initiative—for instance, where universal action is required to bring about an intended outcome. [...] Thus, in what follows, we assume that the unilateralist's curse can arise when each member of a group can unilaterally undertake or spoil an initiative (though for ease of exposition we sometimes mention only the former case).
The veto held by members of the United Nations Security Council is given as an illustrative example of unilateral spoiling. This re-framing of the underlying statistical insight (the unilateral veto being "dual" to the unilateral act) seems relevant to its application to censorship: an author deciding to publish a blog post (even if other forum members think it's harmful) is in the position of taking unilateralist action—but so is a member of a board of pre-readers of whom any one has the power to censor the post (even if the other reviewers think it's fine).
It occurs to me that a karma system (such as that used on this website) has the potential to be an adequate check against the unilateralist's curse as described by Bostrom et al., if we assume that the visibility penalty applied to downvoted posts is sufficient to prevent the harm of the putative infohazard. If some possible post is infohazardous (say, doxxing someone's home address), most users will correctly know not to post it. If one user erroneously decides that doxxing is good (as if having "rolled" an anomalously high error term), we expect their post to be downvoted to oblivion by the supermajority who know that doxxing is bad. Conversely, while a net-upvoted post might still be infohazardous, the harm from such a post should not be attributed to the unilateralist's curse: it reflects the aggregate judgment of many voters rather than one agent's anomalous error draw.
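The aggregation argument can be sketched in the same toy framework. (Again, this is my own illustrative model, not this website's actual voting rules: each voter upvotes or downvotes according to their own independent noisy estimate of the post's true value.)

```python
import random

def expected_karma(true_value, noise_sd, n_voters, trials=50_000):
    """Average net karma for a post of given true value.

    Toy model: each voter forms an independent noisy estimate of the
    post's value and casts +1 if the estimate is positive, -1 otherwise.
    Many such votes average out individual error draws, so the sign of
    the net score tracks the sign of true_value.
    """
    total = 0
    for _ in range(trials):
        total += sum(
            1 if random.gauss(true_value, noise_sd) > 0 else -1
            for _ in range(n_voters)
        )
    return total / trials
```

With `true_value = -1.0` and fifty voters, the expected net score is strongly negative even though the one anomalous author estimated the value as positive: the unilateral error that triggered the post does not survive aggregation over voters.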
(Thanks to David Manheim's comments on "Credibility of the CDC on SARS-CoV-2" for the inspiration.)