Funding for AI alignment research

This is a linkpost for https://docs.google.com/document/d/1NIg4OnQyhWGR01fMVTcxpz8jDd68JdDIyQb0ZZyB-go/edit?usp=sharing

If you are interested in working on AI alignment, and might do full- or part-time work given funding, consider submitting a short application to funding@ai-alignment.com.

Submitting an application is intended to be very cheap. In order to keep the evaluations cheap as well, my process is not going to be particularly fair and will focus on stuff that I can understand easily. I may have a follow-up discussion before making a decision, and I'll try not to favor applications that took more effort.

As long as you won't be offended by a cursory rejection, I encourage you to apply.

If there are features of this funding that make it unattractive, but there are other funding structures that could potentially cause you to work on AI alignment, I'm curious about that as well. Feel free to leave a comment or send an email to funding@ai-alignment.com (I probably won't respond, but it may influence my decisions in the future).

12 comments

Note that this is (by far) the least incentive-skewing of all (publicly advertised) funding channels that I know of.

Apply especially if all of 1), 2) and 3) hold:

1) you want to solve AI alignment

2) you think your cognition is pwned by Moloch

3) but you wish it wasn't

Maybe it'd be useful to make a list of all the publicly advertised funding channels? Other ones I know of:

  • http://existence.org/getting-support/
  • https://futureoflife.org/2017/12/20/2018-international-ai-safety-grants-competition/
  • https://www.lesserwrong.com/posts/4WbNGQMvuFtY3So7s/announcement-ai-alignment-prize-winners-and-next-round
  • https://intelligence.org/mirix/
  • https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program

I noticed your comment and created this website, which lists funding channels for AI alignment research. I'm planning to share it in a new post on LW after I receive some feedback.

Interesting idea! Some thoughts:

You might want to think a bit more about who your target audience is. Given that applying for a job at MIRI/FHI/etc. is always another option, it's not totally clear to me to what extent "x-risk funding" is a natural category.

One possible target audience is e.g. graduate students who are looking for research funding. One possible risk of engaging this audience, which people have talked about, is that they might be more interested in optimizing their own career growth than in actually solving x-risk-related problems. I'm not sure how worried to be about that.

Another possible target audience is people who don't want to move to the Bay Area/Oxford/etc.

Once you know who your target audience is, marketing gets easier because you can market your site wherever that audience hangs out. It might be that the most natural target audience is "people with math/CS expertise who are interested in working on the alignment problem", in which case you could expand your scope and also list things like open positions at MIRI, the AI safety reading group, lists of open problems, recent publications, etc.

This is great feedback! Thanks for taking the time to write it up.

One possible target audience is e.g. graduate students who are looking for research funding. One possible risk of engaging this audience, which people have talked about, is that they might be more interested in optimizing their own career growth than in actually solving x-risk-related problems. I'm not sure how worried to be about that.

Targeting graduate students is an excellent idea. I think the risk you mention is worth considering. I can see two ways this project could go wrong:

1) This project could end up being nothing more than a waste of my time. Given how much time I currently expect to invest, I'm not very concerned about that at the moment. I haven't decided how much time I'm willing to risk wasting on this, but it's a good idea to keep track of the time I'm putting in.

2) If the project is successful, it could direct more people to these organizations to request funding. If these additional people care more about their own career growth than about actually solving x-risk-related problems, it could become harder to make good decisions about which applicants should receive funding. Keeping a small barrier to finding out about these funding opportunities (e.g. needing to pay attention to what's happening in the LW community) might actually be a good thing, though right now I'm not convinced that it is.

I'll continue to think about this.

Once you know who your target audience is, marketing gets easier because you can market your site wherever that audience hangs out. It might be that the most natural target audience is "people with math/CS expertise who are interested in working on the alignment problem", in which case you could expand your scope and also list things like open positions at MIRI, the AI safety reading group, lists of open problems, recent publications, etc.

I like the suggestions here and will make use of them.

Actionable stuff:

Target math/CS/philosophy grad students; focus on AI alignment research funding only; expand the scope to include useful related information.

I'll start working on some changes to the site soon. Maybe the name should change too? I'm not sure.

There is also https://www.general-ai-challenge.org/

I might take this up at a later date. I want to solve AI alignment, but I don't want to solve it now. I'd prefer it if our society's institutions (both governmental and non-governmental) were a bit more prepared.

Differential research that advances safety more than AI capability still advances AI capability.
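
To spell out the arithmetic behind this point (a minimal sketch in notation of my own, not the commenter's): suppose a research program adds safety progress at a constant rate a and capability progress at a constant rate b, with a > b > 0. Then

s(T) = s(0) + aT and c(T) = c(0) + bT,

so the safety-to-capability balance improves over time (the marginal ratio a/b exceeds 1), yet c(T) > c(0) for every T > 0: capability still advances unless b = 0.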

Out of curiosity, what is the upper bound on impact?

A question: can one submit multiple initial applications, each less than a page long? Is there a limit on the total volume?

I don't think I would be interested in being paid to work on this, but a long time ago I wrote about AI alignment in a story. It's about an AI that runs a clinic in a remote village in Africa. http://terasemjournals.net/GNJournal/GN0202/henson1.html

I can go into more detail on the backstory if you want it.

Keith

Hey! I believe we were in the same IRC channel at the time, and I also read your story back then. I still remember some of it. What's the backstory? :)