**UPDATE:** The position is now closed. My thanks to everyone who applied, and also to those who spread the word.

The Association for Long Term Existence and Resilience (ALTER) is a new Israel-based charity promoting longtermist^{[1]} causes. The director is David Manheim, and I am a member of the board. Thanks to a generous grant by the FTX Future Fund Regranting Program, we are recruiting a researcher to join me in working on the learning-theoretic research agenda^{[2]}. The position is remote and suitable for candidates in most locations around the world.

Apply here.

# Requirements

- The candidate must have a track record in mathematical research, including proving non-trivial original theorems.
- The typical candidate has a PhD in theoretical computer science, mathematics, or theoretical physics. However, we do not require the diploma. We do require the relevant knowledge and skills.
- Background in one or several of the following fields is an advantage: statistical/computational learning theory, algorithmic information theory, computational complexity theory, functional analysis.

# Job Description

The researcher is expected to make progress on open problems in the learning-theoretic agenda. They will have the freedom to choose any of those problems to work on, or to come up with their own research direction, as long as I deem the latter sufficiently important in terms of the agenda's overarching goals. They are expected to achieve results with minimal or no guidance. They are also expected to write up their results for publication in academic venues (and/or informal venues such as the alignment forum), prepare technical presentations, et cetera. (That said, we rate researchers according to the estimated impact of their output on reducing AI risk, not according to standard academic publication metrics.)

Here are some open problems from the agenda, described very briefly:

- Study the mathematical properties of the algorithmic information-theoretic definition of intelligence. Build and analyze formal models of value learning based on this concept.
- Pursue any of the future research directions listed in the article on infra-Bayesian physicalism.
- Continue the study of reinforcement learning with imperceptible rewards.
- Develop a theory of quantilization in reinforcement learning (building on the corresponding control theory).
- Study the overlap of algorithmic information theory and statistical learning theory.
- Study infra-Bayesian logic in general, and its applications to infra-Bayesian reinforcement learning in particular.
- Study the behavior of RL agents in population games. In particular, understand to what extent infra-Bayesianism helps to avoid the grain-of-truth problem.
- Develop a theory of antitraining: preventing AI systems from learning particular domains while learning other domains.
- Study the infra-Bayesian Turing reinforcement learning setting. This framework has applications to reflective reasoning and hierarchical modeling, among other things.
- Develop a theory of reinforcement learning with traps, i.e. irreversible state transitions. Possible research directions include studying the computational complexity of Bayes-optimality for finite state policies (in order to avoid the NP-hardness for arbitrary policies) and bootstrapping from a safe baseline policy.

# Terms

The position is full-time, and the candidate must be available to start working in 2022. The salary is between 60,000 and 180,000 USD/year, depending on the candidate's prior track record. The work can be done from any location. Further details depend on the candidate's country of residence.

1. Personally, I don't think the long-term future should override every other concern. And I don't consider existential risk from AI especially "long term", since it can plausibly materialize in my own lifetime. Hence, "longtermist" is better understood as "important *even* if you *only* care about the long-term future" rather than "important *only* if you care about the long-term future". ↩︎
2. The linked article is not very up-to-date in terms of the open problems, but is still a good description of the overall philosophy and toolset. ↩︎

How did you choose the salary range?

The point of reference was salaries of academics in the US, across all ranks.

If you could choose anyone to work on this, who would you choose?

I dunno, maybe Maria-Florina Balcan or Constantinos Daskalakis?

Assuming this is serious, have you reached out to them?

The salary offer is high enough that any academic would at least take the call. If they're not interested themselves, you might be able to provide an endowment to get their lab working on your problems, or at a bare minimum, get them to refer one or more of their current/former students.

The salary is not that high. If Costis or Nina earn less than $150,000/year, I will eat my hat; $200k is more likely. Also, their jobs come with tenure (and access to the world's top graduate students), and you're unlikely to get them to quit.

(It is true that they might refer some of the open problems to their graduate students, though.)

Academics not willing to leave their jobs might still be interested in working on a problem part-time. One could imagine that the right researcher working part-time might be more effective than the wrong researcher working full-time.

Please feel free to repost this elsewhere, and/or tell people about it.

And if anyone is interested in this type of job but is currently still in school, or for other reasons is unable to work full-time at present, we encourage them to apply and note the circumstances, as we may be able to find other ways to support their work, or at least collaborate and provide mentorship.

But even for someone still in school, would the background of having proved non-trivial original theorems be required? All this sounds exactly like the research agenda I'm interested in. I have a BS in math and am working on an MS in computer science. I have a good math background, but not at that level yet. Should I consider applying or not?

For this position, we are looking for people already able to contribute at a very high level. If you're interested in working on the agenda to see whether you'd be able to do this in the future, I'd be happy to chat separately about whether some form of financial support or upskilling would be useful, and where to apply for funding.

I have a BS in mathematics and MS in data science, but no publications. I am very interested in working on the agenda and it would be great if you could help me find funding! I sent you a private message.

Cool, makes sense. I was planning on making various inquiries along these lines starting in a few weeks, so I may reach out to you then. Would there be a best way to do that?

Nope, find me online, I'm pretty easy to reach.

How does this relate to this job offer? Is this a second job or the same job with requirements clarified? Should I give up on this job now if I don't have publications?

It is a completely different job, with different requirements, different responsibilities and even different employers (the other job is at MIRI, this job is at ALTER).

When do applications close?

When are applicants expected to begin work?

How long would such employment last?

There is no particular deadline; it will be my judgment call based on the distribution of applications over time and their quality. I expect the position to remain open for no less than 2 weeks and no more than 6 months, but it's hard to say anything more specific at the moment.

We are flexible about this: if an applicant needs several months to complete other commitments, it is perfectly acceptable.

Until we either solve AI alignment or the AI apocalypse comes :)

(Or, the employment is terminated because one of the parties is unsatisfied, or we run out of funding, hopefully neither will happen.)

If someone wanted to work out if they might be able to develop the skills to work on this sort of thing in the future, is there anything you would point to?

If you're interested, I'd start here: https://www.alignmentforum.org/posts/YAa4qcMyoucRS2Ykr/basic-inframeasure-theory and go through the sequence. (If you're not comfortable enough with the math involved, start here first: https://www.lesswrong.com/posts/AttkaMkEGeMiaQnYJ/discuss-how-to-learn-math )

And if you've gone through the sequence and understand it, I'd suggest helping develop the problem sets that are mentioned in one of the posts, or reaching out to me.

Thanks, I'll see how that goes, assuming I get enough free time to try this.

Application form is closed. Can this be marked in the title?