TLDR: We’re hiring two research assistants to work on advancing developmental interpretability and other applications of singular learning theory to alignment.
Timaeus’s mission is to empower humanity by making breakthrough scientific progress on alignment. Our research focuses on applications of singular learning theory to foundational problems within alignment, such as interpretability (via “developmental interpretability”), out-of-distribution generalization (via “structural generalization”), and inductive biases (via “geometry of program synthesis”). Our team spans Melbourne, the Bay Area, London, and Amsterdam, collaborating remotely to tackle some of the most pressing challenges in AI safety.
For more information on our research and the position, see our Manifund application, this update from a few months ago, our previous hiring call, and this advice for applicants.
As a research assistant, you would likely work on one of the following two projects/research directions (this is subject to change):
See our recent Manifund application for a more in-depth description of this research.
Bonus:
Promising candidates will be invited for an interview consisting of:
1. A 30-minute background interview, and
2. A 30-minute research-coding interview to assess problem-solving skills in a realistic setting (i.e., you will be allowed, and expected, to use LLMs and whatever else you can come up with).
Interested candidates should apply by July 31st. To apply, please submit your resume, write a brief statement of interest, and answer a few quick questions here.
Join us in shaping the future of AI alignment research. Apply now to be part of the Timaeus team!