Foresight Institute has received funding to support projects in three AI safety areas we think are potentially under-explored:

1. Neurotechnology, brain-computer interfaces, whole brain emulation, and "lo-fi" uploading approaches to produce human-aligned software intelligence

2. Computer security, cryptography, and related techniques to help secure AI systems

3. Multi-agent simulations, game theory, and related techniques to create safe multipolar AI scenarios that avoid collusion and foster positive-sum dynamics

The grant application process is now open and accepts applications on a rolling basis. We expect to grant between $1 million and $1.2 million per year across projects, and we look forward to receiving proposals that could make a significant difference for AI safety within short timelines.

Please visit https://foresight.org/ai-safety/ for full details and application instructions. 

Please consider sharing this opportunity with others who may benefit from applying.

Feel free to comment here or email me at allison@foresight.org with any questions, feedback, or ideas for collaborations.
