Foresight Institute AI safety RFPs in automation, security, multi-agent, neuro

by Allison Duettmann
14th Jun 2025
2 min read

Foresight Institute is seeking project proposals across four AI Safety categories: 

  1. Automating research and forecasting
  2. Security technologies for AI-relevant systems
  3. Safe multi-agent scenarios
  4. Neurotech for AI Safety

This Request for Proposals builds on our existing AI Safety Grants in these four categories by specifying the types of projects we would like to see more of in each category.

As with the prior grants program, we plan to continue funding ~$5M in grants annually and accepting applications quarterly; the next deadline is June 30th.

[Apply Now]

 

We seek proposals in the following areas:

 

Automating Research and Forecasting

  • Open-Source AI Research Agents: tools that automate key parts of scientific research – like reading papers, generating hypotheses, or designing and executing experiments – through open-source agents that can be adapted across domains.
  • Automated Forecasting Systems: systems that use AI to generate, compare, and collaborate on forecasts on critical developments – such as AI capabilities, regulation, or biosecurity risks – and present them in ways that builders and decision-makers can act on (a minimal pooling sketch follows this list).
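
To make the forecasting item concrete, here is a minimal, illustrative sketch of one small piece of such a system: pooling probability estimates from several AI forecasters into a single number a decision-maker can act on. The names and the choice of weighted logarithmic pooling are assumptions for illustration, not requirements of the RFP.

```python
# Illustrative sketch (not from the RFP): pooling probability forecasts
# from several AI forecasters into one estimate. The Forecast type and
# geometric_pool rule are assumptions chosen for the example.
from dataclasses import dataclass
from math import prod


@dataclass
class Forecast:
    source: str         # which AI forecaster produced this estimate
    probability: float  # P(event), strictly between 0 and 1
    weight: float       # trust weight, e.g. from past calibration


def geometric_pool(forecasts: list[Forecast]) -> float:
    """Weighted logarithmic opinion pooling: a weighted geometric mean
    of each forecaster's estimate, renormalized back to a probability."""
    total = sum(f.weight for f in forecasts)
    p = prod(f.probability ** (f.weight / total) for f in forecasts)
    q = prod((1 - f.probability) ** (f.weight / total) for f in forecasts)
    return p / (p + q)


if __name__ == "__main__":
    pooled = geometric_pool([
        Forecast("agent-a", probability=0.70, weight=2.0),
        Forecast("agent-b", probability=0.55, weight=1.0),
        Forecast("agent-c", probability=0.80, weight=1.0),
    ])
    print(f"Pooled P(event) = {pooled:.3f}")
```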

[Read more] 

 

Security Technologies for AI-Relevant Systems

  • AI-Augmented Vulnerability Discovery and Formal Verification: tools that use AI to automate red-teaming, detect vulnerabilities, and formally verify critical systems (a toy harness of the kind such tools would generate follows this list).
  • Provably Secure Architectures and Privacy-Enhancing Cryptography: develop provable guarantees for system behavior and scalable cryptographic infrastructure to support trustworthy AI deployment.
  • Decentralized and Auditable Compute Infrastructure: infrastructure that distributes trust, increases transparency, and enables secure AI operation in adversarial environments.
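
As a toy illustration of the vulnerability-discovery item, the sketch below is a tiny differential-testing harness: two implementations of the same parser are run on random inputs, and any disagreement is flagged as a candidate bug. An AI-augmented tool would automate writing such harnesses and generating inputs; the parsers and the planted bug here are assumptions for the example.

```python
# Illustrative sketch (not from the RFP): a tiny differential-testing
# harness of the kind an AI red-teaming agent might generate and drive.
# Both parsers and the planted bug are assumptions for the example.
import random


def reference_parse_uint(s: str) -> int:
    """Trusted reference implementation."""
    if not s.isdigit():
        raise ValueError("not an unsigned integer")
    return int(s)


def optimized_parse_uint(s: str) -> int:
    """Implementation under test (deliberately buggy: it accepts "")."""
    result = 0
    for ch in s:
        if not "0" <= ch <= "9":
            raise ValueError("not an unsigned integer")
        result = result * 10 + (ord(ch) - ord("0"))
    return result


def outcome(impl, s: str):
    """Normalize a call into a comparable (status, value) pair."""
    try:
        return ("ok", impl(s))
    except ValueError:
        return ("rejected", None)


if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(10_000):
        s = "".join(rng.choices("0123456789 -+x", k=rng.randint(0, 6)))
        a, b = outcome(reference_parse_uint, s), outcome(optimized_parse_uint, s)
        if a != b:
            print(f"divergence on {s!r}: reference={a}, optimized={b}")
            break
```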

[Read more] 

 

Safe Multi-Agent Scenarios

  • AI for Negotiation and Mediation: concrete demonstrations of AI systems that help humans find common ground and reach beneficial agreements in complex negotiations.
  • Pareto-Preferred Coordination Agents: autonomous agents that can identify, negotiate, and enforce mutually beneficial arrangements between humans and other AI systems (see the sketch after this list).
  • AI-Enhanced Group Coordination: AI systems that enhance collective intelligence and enable more effective group coordination around shared preferences.
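
To pin down what "Pareto-preferred" means in the coordination item above, here is a minimal sketch: an arrangement is Pareto-preferred over the status quo if no party is worse off and at least one is strictly better off. The party names and utility numbers are illustrative assumptions.

```python
# Illustrative sketch (not from the RFP): testing whether a proposed
# arrangement is Pareto-preferred over the status quo. A coordination
# agent could use such a test to filter candidate deals before
# negotiating over them. Names and utilities are assumptions.


def is_pareto_preferred(status_quo: dict[str, float],
                        proposal: dict[str, float]) -> bool:
    """True if `proposal` leaves no party worse off than `status_quo`
    and makes at least one party strictly better off."""
    assert status_quo.keys() == proposal.keys(), "same parties required"
    no_one_worse = all(proposal[p] >= status_quo[p] for p in status_quo)
    someone_better = any(proposal[p] > status_quo[p] for p in status_quo)
    return no_one_worse and someone_better


def filter_deals(status_quo: dict[str, float],
                 candidates: list[dict[str, float]]) -> list[dict[str, float]]:
    """Keep only the candidate deals that Pareto-improve on the status quo."""
    return [c for c in candidates if is_pareto_preferred(status_quo, c)]


if __name__ == "__main__":
    status_quo = {"human": 1.0, "agent_a": 1.0, "agent_b": 1.0}
    deals = [
        {"human": 1.5, "agent_a": 1.0, "agent_b": 0.8},  # agent_b worse off
        {"human": 1.2, "agent_a": 1.3, "agent_b": 1.0},  # Pareto-preferred
    ]
    print(filter_deals(status_quo, deals))
```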

[Read more] 

 

Neurotech for AI Safety

  • Brain-Aligned AI Models: proposals that use neural and behavioral data to fine-tune AI models toward safer, more human-compatible behavior (a small scoring sketch follows this list).
  • Lo-Fi Emulations and Embodied Cognition: functionally grounded “lo-fi” brain emulations that simulate human-like cognition without full structural fidelity.
  • Secure and Trustworthy Neurotechnology for Human-AI Interaction: work on brain-computer interfaces (BCIs) and neurotech that augment human capabilities and enable more natural, interpretable human-AI collaboration.
  • Biologically-Inspired Architectures and Interpretability Tools: efforts to model AI architectures on biological systems and to apply neuroscience methods to make AI more transparent.
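
As one concrete example of applying a neuroscience method to models, the sketch below scores "brain-alignment" with representational similarity analysis (RSA), comparing the geometry of model activations against neural recordings. The data arrays are random stand-ins, and the sketch assumes numpy and scipy are available.

```python
# Illustrative sketch (not from the RFP): representational similarity
# analysis (RSA) as one way to score how well a model's internal
# representations match neural data. All data here are random stand-ins.
import numpy as np
from scipy.stats import spearmanr


def rdm(activations: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns for each pair of stimuli.
    `activations` has shape (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(activations)


def rsa_score(model_acts: np.ndarray, neural_acts: np.ndarray) -> float:
    """Spearman correlation between the upper triangles of the two RDMs;
    higher means the model's representational geometry better matches
    the neural data."""
    n = model_acts.shape[0]
    iu = np.triu_indices(n, k=1)  # off-diagonal upper-triangle entries
    rho, _ = spearmanr(rdm(model_acts)[iu], rdm(neural_acts)[iu])
    return float(rho)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neural = rng.normal(size=(20, 100))            # 20 stimuli x 100 channels
    aligned = neural @ rng.normal(size=(100, 64))  # shares neural geometry
    unrelated = rng.normal(size=(20, 64))          # unrelated geometry
    print("aligned model RSA:  ", rsa_score(aligned, neural))
    print("unrelated model RSA:", rsa_score(unrelated, neural))
```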

[Read more] 

 

If you have a proposal that fits one of the four main categories but falls outside the specific work listed above, you are still welcome to apply, though the bar for such proposals is much higher. We will not consider applications that fall outside the four main areas.

For more background on why we seek to support projects in these areas, and examples of previous work we have funded, please visit our website.

[Read More & Apply]