FHI is starting a new two-year research scholars program, which I think could be quite exciting to many people on LW. The deadline to apply is in about two weeks (11th of July), and they're still actively looking for applicants.

FHI has a long history of doing interesting research (many of the core ideas of this site originated there). For most people, I think there is a good chance this is the best career opportunity currently available, both for training epistemic rationality and for getting the opportunity to directly tackle many of the key questions around existential risk, AI alignment, and theoretical rationality.

My guess is that if you are a long-time LessWrong reader who has been waiting for an opportunity to make a more direct impact on AI risk or other issues frequently covered on this site, then this might be the best opportunity you will get for a good while.

My guess about the people who should most consider applying:

  • You just finished a degree, considered working in academia, but so far haven't found any opportunities to work on the important things you care about
  • You are currently in a career that does not, and probably won't, allow you to directly make progress on the problems that seem most important to you
  • You have some kind of research project that you've been wanting to work on for a long time, but are worried about not having financial stability or a productive research environment in which to work on that project

Here is a small blurb that Owen (the main organizer behind the project) sent me a while ago when I asked him what he usually sends people:

Which is more likely to be radically transformative in the next two decades: AI or biotechnology? Can we draw lessons from the industrial revolution or the development of nuclear weapons? What might humanity achieve on a timescale of millions of years, and does that matter for decisions today? How would the (non-)existence of intelligent aliens affect us?
If you find these questions compelling, you might be interested in joining the Future of Humanity Institute. Our new Research Scholars Programme will employ a small number of talented, curious thinkers to explore such topics, giving mentorship to help them develop judgement about what to work on from the perspective of securing a flourishing future. For details, see

Yeah, this seems especially good for folks who want the affordance to continue within academia, but haven't yet been able to build expertise on the problems that seem most important, and will otherwise likely be shunted into the favoured topics of the available PhD supervisors. This is a place where you can build expertise about something important, and if you complete a project others will be able to see that.

Also, I have had the repeated experience of seeing competent people shy away from applying to things before they know whether they're definitely a good fit, whether the program/project can accommodate their needs, or whether the program/project has enough people already. In general, applying is cheap and will give you more information, while not applying rarely gives you more info. So I hope folks err on the side of applying.

As a data point for why this might be occurring: I may be an outlier, but I've not had much luck getting replies or useful dialogue from x-risk-related organisations in response to my attempts at communication.

My expectation, currently, is that if I apply I won't get a response, and I will have wasted my time composing an application. I won't get any more information than I previously had.

If this isn't just me, you might want to encourage organisations to be more communicative.