Want to increase the odds that humanity correctly navigates whatever risks and promises artificial intelligence may bring? Interested in spending this summer in the SF Bay Area, working on projects and picking up background alongside like-minded others, with some possibility of staying on thereafter? Want to work with, and learn with, some of the best thinkers you'll ever meet? – more specifically, some of the best at synthesizing evidence across a wide range of disciplines, and using it to make incremental progress on problems that are both damn slippery and damn important?
If so, drop us an email. Show us your skills; give us a chance to jointly brainstorm what you might be able to do.
We are particularly interested in people who have *any* of the following traits:
- Dazzling brilliance at math or philosophy;
- A history of successful academic paper-writing, and strategic understanding of journal submission processes, grant application processes, etc.;
- Strong general knowledge of science or social science, and the ability to read rapidly and/or to quickly pick up new fields;
- Good interpersonal skills, writing skills, and/or marketing skills;
- Organization, strong ability to keep projects going without much supervision, and the ability to get mundane stuff done in a reliable manner;
- Skill at implementing (non-AI) software projects, such as web apps for interactive technological forecasting, rapidly and reliably;
- A history of successfully pulling off large projects or events;
- Unusual competence of some other sort, in some domain we need, but haven’t realized we need.
The only musts are that you be capable, rational, and interested in helping reduce existential risk.
If you’re interested, send an email to Anna (annasalamon at gmail dot com), who will be doing the first-pass screening. Include:
- Why you’re interested;
- What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume);
- Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.
Our application process is fairly informal, so send us a quick email as an initial inquiry; after some correspondence, we can decide together whether to follow up with more application components.
(Background on where we're coming from: SIAI is currently seeing who's out there and brainstorming possibilities, though it now looks likely that a summer project will go forward. If you're part of who's out there, do let us know. Plausible projects include:
- Improving technological forecasting around AI (with wide probability intervals, attention to the heuristics and biases literature, etc.);
- Writing academic conference/journal papers to seed academic literatures on questions around AI risks (e.g., takeoff speed, economics of AI software engineering, genie problems, what kinds of goal systems can easily arise and what portion of such goal systems would be foreign to human values; theoretical compsci knowledge would be helpful for many of these questions);
- Helping construct and/or test useful rationality curricula;
- Other activities that further our understanding, or that of other relevant actors, of what humanity is up against or how to address it -- either directly, by research and writing on the topics themselves, or indirectly, by improvements in our individual or collective rationality.)
(This post is specially exempted from the "no AI discussion until after April" ban because it is time-urgent.)
ETA: Fluency in economics would also be a plus. (But don't feel like you need all the traits. Rationality, general competence, and unusual skill in one of the above is fine. Special consideration if you're young and have indicators of promise, though for the most part we're looking for people who are older and have actual past success.)