I don't pretend to speak for anyone but myself, but as someone who is strongly anti-war, and who is open about standing for the values of peace, I strongly disagree with this post and don't think that this sort of thing should be welcomed in any kind of AI safety forum.
Militaries are effectively the worst possible actors from the standpoint of misuse risk and the creation of AI that is dangerous by design. This is a known problem, which this post at best ignores and at worst deceptively misconstrues as a non-issue.
To use a rough analogy, this post feels like a petrochemical company visiting an environmental conference to solicit participants for developing more "environmentally friendly" petroleum extraction techniques. Sure, the petrochemical company could no doubt benefit from the expertise at that venue. But is that really what the venue is for?
In the case of LW, is this really a forum where the military should be posting RFPs for military AI alignment? What is the point of military AI alignment, except to create AI that is as dangerous as possible, always pushing the extreme boundaries of any safety profile? To make another analogy, this is like someone coming onto a 3-D printing forum offering to pay people to implement their vision for printed weapons. It should immediately raise serious concerns of all sorts.
I think this post makes it perfectly clear, for me personally, that narrowly construed AI alignment (e.g., alignment to militaries) cannot and must not be the goal of those who wish to advocate for AI safety as a serious objective. At least for someone like me (strongly anti-war), alignment is not enough: the AI must also be endowed with strong and firm moral principles, such as the values of peace and lawful behavior.
Of course, it goes without saying that what the US is doing right now in Iran, Venezuela, and other places—and which DARPA is necessarily a part of—is an egregious violation of international law, and it makes this post particularly problematic, right now, especially without any kind of attempt to acknowledge moral problems and pitfalls.
Like I said, I'm really not one to speak on behalf of others here, and perhaps even a majority of participants here disagree with my views. Nonetheless, I hope this is helpful as an articulation of an opposing view.
AICRAFT: DARPA-Funded AI Alignment Researchers — Applications Open
TL;DR: We hypothesize that most alignment researchers have more ideas than they have engineering bandwidth to test. AICRAFT is a DARPA-funded project that pairs researchers with a fully managed professional engineering team for two-week pilot sprints, designed specifically for high-risk ideas that might otherwise go untested. We will select 6 applicants and execute a two-week pilot with each; the most promising pilot may be given a 3-month extension. To our knowledge, this is the first MVP engaging DARPA directly with the alignment community, and if successful it can catalyze government-scale investment in alignment R&D. Apply here.
Applications close March 27, 2026 at 11 PM PST.
What is AICRAFT?
AICRAFT (Artificial Intelligence Control Research Amplification & Framework for Talent) is a DARPA-funded seedling project executed by AE Studio. The premise is straightforward: we hypothesize that alignment research could progress faster if the best researchers had more leverage. We believe researchers are currently bottlenecked on either execution (i.e., doing the hands-on experiments themselves) or management (i.e., managing the teams that execute the work). Management is higher leverage, but what if we could push that much further? AE Studio has been running a model in which we pair researchers with fully managed ML teams, allowing the researcher to spend as little as 45 minutes per week with our team. By removing the execution and management burden, this model provides a new outlet for research ideas that would otherwise have gone untested.
The U.S. pool for AI/ML engineering is much larger than the talent pool for AI alignment. If experts in alignment can effectively scale their capacity with general-purpose AI/ML engineering talent, that unlocks a much larger pipeline of alignment research than the field currently supports.
AICRAFT tests this by pairing researchers directly with an experienced engineering team for focused two-week sprints. The goal is to get initial signal on ideas that wouldn't otherwise get tested. If successful, the most promising ideas may have an opportunity to expand into a 3-month engagement.
We will select 6 researchers and execute a two-week research sprint with each. The purpose of the sprint is to get signal on a high-risk idea, or to prove it wrong quickly.
The Bigger Picture
DARPA has set a goal of achieving military-grade AI, as our CEO recently discussed in the Wall Street Journal. What makes that relevant to alignment? Military deployment requires reliability guarantees that deceptively aligned or unpredictably behaving systems simply can't meet. You can't field an AI system that pursues hidden objectives or behaves differently under distribution shift. In that sense, the DoD's requirements create a concrete, well-funded forcing function for alignment research outcomes, even if the framing and vocabulary differ from what you'd see on the Alignment Forum.
AICRAFT is the first direct engagement between DARPA and the alignment research community. If the pilots demonstrate that this model works, it builds the case for substantially larger government investment in alignment R&D, the kind of scale that grants and private philanthropy alone can't reach.
This may be the most important and highest-leverage research engagement you have all year, as it can catalyze large-scale government investment in alignment R&D.
Who should apply?
We're especially interested in researchers who have ideas that don't have other outlets. Maybe you have 10 ideas but bandwidth to pursue 2-3. Maybe there's a high-risk hypothesis that isn't a good fit for a grant or isn't supported by your current employer, but is worth getting early signal on.
If you have a testable hypothesis in AI control, alignment, or interpretability, and can articulate what signal you'd look for in two weeks, we want to hear from you.
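To make "a testable hypothesis with two-week signal" concrete, here is a minimal, purely illustrative sketch of the shape such an experiment can take: a linear probe run against activations to test whether some property is linearly decodable. Everything below is an assumption for illustration (synthetic data stands in for real model activations; the property, scale, and threshold are invented), not part of the AICRAFT program.

```python
# Illustrative sketch only: a quickly-falsifiable probe experiment of the
# kind a two-week sprint could run. Synthetic data stands in for real
# model activations; all names and numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for activations: n samples, d dims, with a binary
# property weakly encoded along a single direction plus isotropic noise.
n, d = 2000, 64
labels = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
acts = rng.normal(size=(n, d)) + 1.5 * np.outer(labels - 0.5, direction)

# Simple train/test split.
split = n // 2
X_tr, y_tr = acts[:split], labels[:split]
X_te, y_te = acts[split:], labels[split:]

# Ridge-regularized least-squares linear probe.
lam = 1e-2
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ (y_tr - 0.5))
preds = (X_te @ w > 0).astype(int)
accuracy = (preds == y_te).mean()

# The "signal": held-out accuracy well above chance supports the
# hypothesis; accuracy near 0.5 falsifies it quickly.
print(f"probe accuracy: {accuracy:.2f}")
```

The point is not the probe itself but the shape of the claim: a hypothesis, a cheap experiment, and a pre-registered threshold that tells you within days whether the idea deserves further investment.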
How it works
You bring (~2 hours/week):
We deliver (60+ hours of execution):
After the pilot:
You receive a final report with documented results. Promising pilots are recommended to DARPA for a 3-month extended engagement, contingent on your availability.
The application
The application is intentionally lightweight: it takes under 10 minutes. The core of it is a 500-word research abstract addressing three questions:
Selected applicants will be invited to a brief follow-up call to talk through the idea and answer questions about the program. All applicants will be notified of final decisions by late April.
FAQ
How much time commitment is this? Just four hours! You’ll spend two hours per week for the two-week pilot. This includes an initial planning session, async updates during the sprint, and demo sessions at the end of each week.
Can I participate if I'm affiliated with a university or company? Yes, if you can enter a subcontractor agreement with AE Studio. Most institutions have straightforward consulting processes. The two-hour-per-week commitment typically falls within standard outside activity policies.
What compute and resources are available? Cloud compute from AWS, GCP, Azure, and specialized ML platforms. API access to frontier models for evaluations, synthetic data generation, and related tasks.
What happens after the two-week pilot? You receive a final report with documented results. Strong pilots may be recommended for a 3-month extended engagement, contingent on your availability.
Is there compensation? Yes, researchers receive a $1,000 stipend for approximately 4 hours of work over the 2-week period.
What is the selection process? We review applications after the deadline, invite promising applicants to a brief call, and notify all applicants of final decisions by late April.
Apply here — applications close March 27, 2026 at 11 PM PST.
AICRAFT is funded by DARPA and executed by AE Studio. The views, opinions, and findings contained herein are those of the authors and should not be construed as representing official policies or endorsements of DARPA or the U.S. Government.