Cross-posted from the EA Forum. I would add that we are open to and positively interested in critiques of any aspect of effective altruism from users of this Forum and the rationalist community. We plan to respond to comments on the EA Forum, but may check this post less often.


tl;dr: We're running a writing contest for critically engaging with theory or work in effective altruism (EA). 

Submissions can be in a range of formats (from fact-checking to philosophical critiques or major project evaluations), and can focus on a range of subject matters (from assessing empirical or normative claims to evaluating organizations and practices).  

We plan on distributing $100,000, and we may end up awarding more than this amount if we get many excellent submissions. 

The deadline is September 1, 2022. You can find the submission instructions below. Neither formal nor significant affiliation with effective altruism is required to enter the contest.

We are: Lizka Vaintrob (the Content Specialist at the Centre for Effective Altruism), Fin Moorhouse (researcher at the Future of Humanity Institute), and Joshua Teperowski Monrad (biosecurity program associate at Effective Giving). The contest is funded via the FTX Future Fund Regranting Program, with organizational support from the Centre for Effective Altruism.

We ‘pre-announced’ this contest in March.

The rest of this post gives more details, outlines the kinds of critical work we think are especially valuable, and explains our rationale. We’re also sharing a companion resource for criticisms and red teams.

How to apply

Submit by posting on the EA Forum and tagging the post with the contest’s tag, or by filling out this form.

If you post on the Forum, you don't need to do anything except tag your post with the “Criticism and Red Teaming Contest” topic, and we’ll consider your post for the contest. If you’d prefer to post your writing outside the Forum, you can submit it via this form — we’d still encourage you to cross-post it to the Forum (although please be mindful of copyright issues). 

We also encourage you to refer other people’s work to the contest if you think more people should know about it. To refer someone else’s work, please submit it via this form. If it wins, we may reward you for this — please see an explanation below.

The deadline is September 1, 2022.

Please contact us with any questions. You can also comment here.


We have $100,000 currently set aside for prizes, which we plan on fully distributing.

Prizes will fall under three main tiers:

  • Winners: $20,000
  • Runners up: $5,000 each
  • Honourable mentions: $1,000 each

In addition, we may award a prize of $100,000 for outstanding work that looks likely to cause a very significant course adjustment in effective altruism.

Therefore, we’re prepared to award (perhaps significantly) more than $100,000 if we’re impressed by the quality and volume of submissions. 

We’re also offering a bounty for referring winning submissions: if you refer a winning submission (if you’re the first person to refer it, and the author never entered the contest themselves), you’ll get a referral bounty of 5% of the award.

We will also consider helping you find proactive funding for your work if you require the security of guaranteed financial support to enable a large project (though we may deduct proactive funding from any prize money you are awarded). See the FAQ for more details.

Submissions must be posted or submitted no later than 11:59 pm BST on September 1st, and we’ll announce winners by the end of September.


Overall, we want to reward critical work according to a question like: “to what extent did this cause me to change my mind about something important?” — where “change my mind” can mean “change my best guess about whether some claim is true”, or just “become significantly more or less confident in this important thing.”

Below are some virtues of the kind of work we expect to be most valuable. We’ll look out for these features in the judging process, but we’re aware it can be difficult or impossible to live up to all of them:

  • Critical. The piece takes a critical or questioning stance towards some aspect of EA theory or practice. Note that this does not mean that your conclusion must end up disagreeing with what you are criticizing; it is entirely possible to approach some work critically, check the sources, note some potential weaknesses, and conclude that the original was broadly correct.
  • Important. The issues discussed really matter for our ability to do the most good as a movement.
  • Constructive and action-relevant. Where possible we would be most interested in arguments that recommend some specific, realistic action or change of belief. It’s fine to just point out where something is going wrong; even better to be constructive, by suggesting a concrete improvement.
  • Transparent and legible. We encourage transparency about your process: how much expertise do you have? How confident are you about the claims you’re making? What would change your mind? If your work includes data, how were they collected? Relatedly, we encourage epistemic legibility: the property of being easy to argue with, separate from being correct.
  • Aware. Take some time to check that you’re not missing an existing response to your argument. If responses do exist, mention (or engage with) them.
  • Novel. The piece presents new arguments, or otherwise presents familiar ideas in a new way. Novelty is great but not always necessary — it’s often still valuable to distill or “translate” existing criticisms.
  • Focused. Critical work is often (but not always) most useful when it is focused on a small number of arguments and a small number of objects. We’d love to see (and we’re likely to reward) work that engages with specific texts, strategic choices, or claims.

We don't expect that every winning piece needs to do well at every one of these criteria, but we do think each of these criteria can help you most effectively change people’s minds with your work.

We also want to reward clarity of writing, avoiding ‘punching down’, awareness of context, and a scout mindset. We don’t want to encourage personal attacks, or diatribes that are likely to produce much more heat than light. And we hope that subject-matter experts who don’t typically associate with EA find out about this, and share insights we haven’t yet heard.

What to submit

We’re looking for critical work that you think is important or useful for EA. That’s a broad remit, so we’ve suggested some topics and kinds of critiques below.

If you’re looking for more detail, we’ve collaborated on a separate post that collects resources for red teaming and criticisms, including guides to different kinds of criticisms, and examples. If you’re interested in participating in this contest, we highly recommend that you take a look. (We’d also love help updating and improving it.)

It’s helpful — but not required — to also suggest 1–3 people you think most need to heed your critique. For many topics, this nomination is better done privately (contact us, or submit through the form). We’ll send it their way where possible. (If you don’t know who needs to see it most, we’ll work it out.) 


You might consider framing your submission as one of the following:

  • Minimal trust investigation — A minimal trust investigation involves suspending your trust in others' judgments, and trying to understand the case for and against some claim yourself. Suspending trust does not mean determining in advance that you’ll end up disagreeing.
  • Red teaming — ‘Red teaming’ is the practice of “subjecting [...] plans, programmes, ideas and assumptions to rigorous analysis and challenge”. You’re setting out to find the strongest reasonable case against something, whatever you actually think about it (and you should flag that this is what you’re doing).
  • Fact checking and chasing citation trails — If you notice claims that seem crucial, but whose origin is unclear, you could track down the source, and evaluate its legitimacy.
  • Adversarial collaboration — An adversarial collaboration is where people with opposing views work together to clarify their disagreements.
  • Clarifying confusions — You might simply be confused about some aspect of EA, rather than confidently critical. You could try getting clear on what you’re confused about, and why.
  • Evaluating organizations — including their (implicit) theory of change, key claims, and their track record; and suggesting concrete changes where relevant.
  • Steelmanning and ‘translating’ existing criticism for an EA audience — We’d love to see work succinctly explaining existing criticisms and constructing the strongest versions of them (‘steelmanning’). You might consider doing this in collaboration with a domain expert who does not consider themselves part of the EA community.

Again, for more detail on topic ideas, kinds of critiques, and examples: visit our longer post with resources for critiques and red teams.

We don’t want to give an analogous list for topic ideas, because any list is necessarily going to leave things out. However, you might take a look at Joshua’s post outlining four categories of effective altruism critiques: normative and moral questions, empirical questions, institutions & organizations, and social norms & practices.

Browsing this Forum (especially curated lists like the Decade Review prizewinners, the EA Wiki, and the EA Handbook) could be a good way to get ideas if you are new to effective altruism.

If you’re unsure whether something you plan on writing could count for this contest, feel free to ask us.

Additional resources

We’ve compiled a companion post, in which we’ve collected some resources for criticisms and red teaming. 

We’re also tentatively planning on running (or helping with) several workshops on criticisms and red teaming, which will be open to anyone who is interested, including people who are new to effective altruism. We hope that the first two will be in June. If you’d like to hear about dates when they’re decided, you can fill out this form.

The judging panel

The judging panel is:

No one on the judging panel will be able to “veto” winners, and every submission will be read by at least two people. If submissions are technical and outside of the panelists’ fields of expertise, we will consult domain experts.

If we get many submissions or if we find that the current panel doesn’t have enough bandwidth, we may invite more people to the panel. 


Why we’re doing this

In short, we think there are some reasons to expect good criticism to be undersupplied relative to its real value. And that matters: as EA grows, it’s going to become increasingly important that we scrutinize the ideas and assumptions behind key decisions — and that we welcome outside experts to do the same.

Encouraging criticism is also a way to encourage a culture of independent thinking, and openness to criticism and scrutiny within the EA community. Part of what made and continues to make EA so special is its epistemic culture: a willingness to question and be questioned, and freedom to take contrarian or unusual ideas seriously. As EA continues to grow, one failure mode we anticipate is that this culture may give way to a culture of over-deference.

We also really care about raising the average quality of criticism. Perhaps you can recall some criticisms of effective altruism that you think were made in bad faith, or otherwise misrepresented their target in a mostly unhelpful and frustrating way. If we don’t make an effort to encourage more careful, well-informed critical work, then we may have less reason to complain about the harms that poor-quality work can cause, such as by misinforming people who are learning about effective altruism. Crucially, we’d also miss out on the real benefits of higher-quality, good-faith criticism.

In his opening talk for EA Global this year, Will MacAskill considered how a major risk to the success of effective altruism is the risk of degrading its quality of thinking: “if you look at other social movements, you get this club where there are certain beliefs that everyone holds, and it becomes an indicator of in-group mentality; and that can get strengthened if it’s the case that if you want to get funding and achieve very big things you have to believe certain things — I think that would be very bad indeed. Looking at other social movements should make us worried about that as a failure mode for us as well.”

It’s also possible that some of the most useful critical work goes relatively unrewarded because it might be less attention-grabbing or narrow in its conclusions. Conducting really high-quality criticism is sometimes thankless work: as the blogger Dynomight points out, there’s rarely much glory in fact-checking someone else’s work. We want to set up some incentives to attract this kind of work, as well as more broadly attention-grabbing work.

Ultimately, critiques have an impact by bringing about actual changes. The ultimate goal of this contest is to facilitate those positive changes, not just to spot what we’re currently getting wrong.

In sum, we think and hope: 

  1. Criticism will help us form truer beliefs, and that will help people with the project of doing good effectively. People and institutions in effective altruism might be wrong in significant ways — we want to catch that and correct our course.
    1. This is especially important in the non-profit context, since it lacks many of the signals in the for-profit world (like prices). For-profit companies have a strong signal of success: if they fail to make a profit, they eventually fail. One insight of effective altruism is that there are weaker pressures for nonprofits to be effective — to achieve the goals that really matter — because their ability to fundraise isn’t necessarily tied to their effectiveness. Charity evaluators like GiveWell do an excellent job at evaluating nonprofits, but we should also try to be comparably rigorous and impartial in assessing EA organizations and projects, including in areas where outputs are harder to measure. Where natural feedback loops don’t exist, it’s our responsibility to try to create them!
    2. It’s also especially important for effective altruism, given that so many of the ideas are relatively new and untested. We think this is especially true of longtermist work.
  2. Stress-testing important ideas is crucial even when the result is that the ideas are confirmed; this allows us to rely more freely on the ideas.
  3. We want to sustain a culture of intellectual openness, open disagreement, and critical thinking. We hope that this contest will contribute to reinforcing that culture.
  4. Highlighting especially good examples of criticism may create more templates for future critical work, and may make the broader community more appreciative of critical work.
  5. We also think that people in the effective altruism network tend to hear more from other people in the network, and hope that this contest might bring in outside experts and voices. (You can see more discussion of this phenomenon in "The motivated reasoning critique of effective altruism".)
  6. We want to break patterns of pluralistic ignorance where people underrate how sceptical or uncertain others (including ‘experts’) are about some claim.

Finally, we want to frame this contest as one step towards generating high-quality criticism, not the final one. For instance, we’re interested in following up with winning submissions, such as by meeting with winning entrants to discuss ways to translate their work into concrete changes and to communicate it to the relevant stakeholders.

What this is not about

Note that critical work is not automatically valuable just by virtue of being critical: it can be attention-grabbing in a negative way. It can be stressful and time-consuming to engage with bad-faith or ill-considered criticism. We have a responsibility to be especially careful here.

This contest isn’t about making EA look open-minded or self-scrutinizing in a performative way: we want to award work that actually strikes us as useful, even if it isn’t likely to be especially popular or legible for a general audience.

We’re not going to privilege arguments for more caution about projects over arguments for urgency or haste. Scrutinizing projects in their early stages is a good way to avoid errors of commission; but errors of omission (not going ahead with an ambitious project because of an unjustified amount of risk aversion, or oversensitivity to downsides over upsides) can be just as bad.

Similarly, we don’t want this initiative to only result in writing that one-directionally worries about EA ideas or projects being too ‘weird’ or too different from some consensus or intuitions. We’re just as interested to hear why some aspect of EA is being insufficiently weird — perhaps not taking certain ideas seriously enough. Relatedly, this isn’t just about being more epistemically modest: we are likely being both overconfident in some spots, and overly modest in others. What matters is being well calibrated in our beliefs!

We would also caution against criticizing the actions or questioning the motivations of a specific individual, especially without first asking them. We urge you to focus on the ideas or ‘artefacts’ individuals produce, without speculating about personal motivations or character — this is rarely helpful.

Contact us

Email us, message any of the authors of this post via the Forum, or leave a comment on this post. 


Submissions and how they’ll be judged

  • Can I submit work I’ve already done? Yes, if it's recent. We’re accepting posts from the date of our pre-announcement (March 25, 2022) onwards.
  • Can I submit something that I got funding for already? Yes. Let us know if you have specific concerns.
  • Can I refer another person’s work? Yes. And if that person’s work wins a prize (and the author didn’t submit it themselves, and you’re the first person to refer the work), we’ll also reward you with a commission (5% of the prize). We’d love to discover work from outside the EA community that could be relevant for effective altruism. Submit referrals via this form.
  • What if I want to work on a large project for this contest that I can’t afford to carry out on my own time? Contact us. We can’t guarantee anything, but we’d like to help enable your work, by pointing you to sources of funding in effective altruism, and potentially arranging direct financial support where necessary. If we (the organizers of this contest) directly fund your work in advance, we’ll deduct whatever amount you received in advance from any potential prize that you win.
  • I have a complaint or criticism about an organization or individual, but it’s not something that’s appropriate to share publicly. You might consider contacting the CEA Community Health Team, who can advise on the next steps, including acting as an intermediary. You can also send them an anonymous message.
  • Can I submit anonymously? Yes. You can make an anonymous account on the Forum, or you can use this form to submit without posting to the Forum.
  • Do I have to already be involved in effective altruism to submit something? No, not at all. We’re actively excited to bring in external ideas and expertise. If you’re new to the Forum, the Wiki could be a good place to start to check for what has already been written. You’re welcome to make broad criticisms of effective altruism, but focused critiques that draw on your area(s) of expertise could stand an especially good chance of being entirely novel.
  • I’d love to hear what [person who’s not engaged with effective altruism] would have to say about [some aspect of effective altruism]. How can I make that happen? If you know this person, we encourage you to reach out to them! If you’re unsure or uncomfortable about contacting them directly, let us know, and we can try getting in touch.
  • Some of the panellists belong to organizations I’d like to criticize. Isn’t that an issue? All our panellists are committed to evaluating your work on its own merit — being associated with an org or project you are criticizing should not, and will not, count as a reason to downgrade your work. Panellists will recuse themselves if they (or we) feel that a conflict of interest will inhibit their ability to fairly evaluate a particular submission. If you’re still concerned about this or would like to request that specific panellists be recused, feel free to contact us.
  • What counts as “EA”? We have in mind the ideas, institutions, projects, and communities associated with effective altruism. You can learn more here on the Forum.
  • Does the criticism or red teaming have to come to the conclusion that the original work was wrong? No. We’re very happy to award prizes to work of the form: “I checked the arguments and sources in this text. In fact, they check out. Here are my notes.”
  • Does my submission need to fulfill all the criteria outlined above? No. We understand that some formats make it difficult or impossible to satisfy all the requirements, and we don’t want that to be a barrier to submitting. At the same time, we do think each of these criteria is a good indicator of the kind of work we’d like to see.

About the contest

  • How does this relate to Training for Good’s ‘Red Team challenge’? The two are distinct. The Red Team Challenge (RTC) is a program run by Training for Good which provides training in red-teaming best practices and then pairs small teams of 2–4 people to critique a particular claim and publish the results. We would be very excited to see the results of that programme submitted to this contest — so this contest is a complement to the Red Team Challenge, rather than a substitute. Training for Good may also collaborate with us on workshops and [other resources].
  • Where’s the money coming from? The prizes will be awarded via the FTX Future Fund Regranting Program. The Centre for Effective Altruism is providing operational support (like coordination between judges). Note that the EA Forum is not sponsoring this prize, and isn't liable for it.
  • Doesn’t this penalize the people whose work is getting criticized? We want to encourage a norm where having your work fairly criticized is great news: an indication that it was trying to answer an important question. We want to encourage a sense of criticism being part of the joint enterprise to figure out the right answers to important questions. However, we are aware that being criticized is not always enjoyable, and some criticism is made in bad faith. If you’re concerned about being the subject of bad-faith criticism, let us know.
  • Does this mean that you think that non-critical work is less valuable than critical work? No. We just think that high-quality critical work is often under-rewarded and under-supplied — like many other kinds of non-critical work!


  • I have another question that isn’t answered in this post. Leave a comment if you suspect others might have the same question, and we’ll try to answer it here. Otherwise, feel free to contact us.

We're extremely grateful to everyone who helped us kick this off, including the many people who gave feedback following our pre-announcement of the contest.

Comments

If you’d prefer to post your writing outside the Forum, you can submit it via this form — we’d still encourage you to cross-post it to the Forum (although please be mindful of copyright issues).

  1. I want to make sure that everyone is aware of this option. Nobody should miss this sentence.
  2. This should be standard. Even with the form submission, I'd worry about it getting intercepted by hackers. AI, pandemics, and even global poverty have a significant geopolitical element, and even without that element the vested interests would still be very large.
  3. Public submissions can also be very good, like in the AGI rhetoric contest from last month. For example, I'd like EY to stop using the specific phrase "dying with dignity" because it's heavily associated with the assisted suicide activism, and many people will remember that those words were also used by Jim Jones, the leader of an SF-area communist cult, as he gave his final speech that manipulated 900 people into killing themselves. If EY unwittingly continues to use this super cursed phrase again, at some point between now and the end of this contest in a few months, then that would be bad, so I'd like him to stop now instead of waiting until after the entries are evaluated. That's not even an entry in the contest, I'm getting zero net money from this, please just stop.