tl;dr: qualified software engineer considering what their next job might be; now thinking about direct work as a serious option.

Previous plan was something like:

SELECT * FROM big_tech_co
WHERE location = 'remote'
ORDER BY team_fit, salary

For a variety of reasons, I'm not a huge fan of this plan anymore.

New plan:

  1. Check the job pages of all the AI alignment orgs I know
  2. Check 80000 Hours jobs board in case I missed something
  3. ???
  4. Post question on LW

I didn't find anything looking at the job pages of the AI alignment orgs that I'm familiar with, and 80000 Hours didn't bring up anything that fit the bill either, so here we are.

Me:

  • mission-aligned, long-time member of the community (since ~2013) and meetup organizer (since 2017)
  • fairly strong software engineer, mostly backend-focused but have recently picked up enough React to be able to meaningfully contribute to an existing project on the front-end (no design skills to speak of, yet). Can also rapidly onboard myself to an unfamiliar codebase. Legible artifacts demonstrating these claims beyond a resume include:
  • happy to travel something like 10% of the time, especially to the Bay, to integrate professionally & socially
  • happy to do something like a part-time work trial (outside of my core working hours, but including weekends), and willing to take a few days off to do this in-person if the fit seems good
  • in general, happy to do some non-standard things that might not be expected of me at a typical tech job, be agent-y, etc., particularly to compensate for the downsides to an organization of my working remotely

You:

  • an organization that works either directly on AI alignment, or a "meta" org that e.g. better enables others to work on AI alignment
  • willing to hire someone on these (or similar) terms!
  • most everything else I'm looking for seems pretty strongly correlated with that, and I'm aware that there isn't exactly a wealth of options to filter on

Does anyone know of any orgs that I might have missed?

  • Lightcone & Redwood don't seem to be hiring remotely
  • Aligned AI doesn't specify whether they're hiring remotely (but since they're in the UK, you'd imagine they'd say if they were)
  • MIRI & ARC aren't hiring engineers

Most of the other orgs I'm familiar with seem to be doing differently-targeted work (e.g. Ought), or are doing work which seems to boil down to "capabilities advancement", but I'm open to arguments here if I've misjudged one or more of them.



5 Answers

mic

Apr 28, 2022

60

The Fund for Alignment Research is a new organization to help AI safety researchers, primarily in academia, pursue high-impact research by hiring contractors. They're a group of researchers affiliated with the Center for Human-Compatible AI at UC Berkeley and other labs like Jacob Steinhardt's at UC Berkeley and David Krueger's at Cambridge. They are hiring for:

  • Research Engineer (20–40 hours/week, remote or in Berkeley, $50–100/hour) – looking for 2–3 individuals with significant software engineering experience or experience applying machine learning methods.
  • Communications Specialist and Senior Communications Specialist (10–40 hours/week, remote or in Berkeley, $30–80/hour) – communicating high-impact AI safety research. This could be via technical writing/editing, graphic design, web design, presentation development, social media management, etc.

If you have any questions about the role, please contact them at hello@alignmentfund.org.

Appreciate the recommendation.  Around April 1st I decided that the "work remotely for an alignment org" thing probably wouldn't work out the way I wanted it to, and switched to investigating "on-site" options - I'll write up a full post on that when I've either succeeded or failed on that score.

On a mostly unrelated note, every time I see an EA job posting that pays at best something like 40-50% of what qualified candidates would get in the industry, I feel that collide with the "we are not funding constrained" messaging.  I understand that ther... (read more)

wassname
For what it's worth, I was in a similar boat: I've long wanted to work on applied alignment, but also to stay in Australia for family reasons. Each time I've changed jobs, I've made the same search as you, and ended up just getting a job where I can apply some ML to industry, so that I can remain close to the field. For all the calls for alignment researchers, most orgs seem hesitant to do the obvious thing that would really expand their talent pool: opening up to remote work. They evidently struggle to manage and communicate remotely, which prevents them from accessing a larger and cheaper pool of global talent. But they could accelerate alignment by merely supplementing with remote contractors, or by learning to manage remote work.
RobertM
For what it's worth, I've updated somewhat against the viability of remote work here (mostly for contingent reasons - the less "shovel-ready" work is, the more of a penalty I think you end up paying for trying to do it remotely, due to communication overhead).  See here for the latest update :)

Xodarap

Mar 14, 2022

30

We (the Center for Effective Altruism) are hiring Full-Stack Engineers. We are a remote first team, and work on tools which (we hope) better enable others to work on AI alignment, including collaborating with the LessWrong team on the platform you used to ask this question :)

Interesting, was this recently posted?  Do you mind if I DM you with some questions?

Xodarap
Sure, feel free to DM me.

Yonatan Cale

Jun 11, 2022

20

Anthropic will want you to be in their office in California for at least ~25% of the time (based on one discussion with them, please correct me if you learn otherwise)

Yonatan Cale

Jun 11, 2022

20

Have you considered CEA? Not a perfect fit, but they're remote-first, and I personally think they help with alignment research indirectly by building the EA community and also by improving lesswrong.com (they use the same code). It's really important, I think, for these places to (1) be inviting, (2) promote good, complicated (non-toxic) discussions, and (3) connect people to relevant orgs/people, including AI safety orgs.

Again, I'm not sure this is what you're looking for, but it resonates with me personally.

mic

Mar 02, 2022

20

I'm curious why you think Ought doesn't count as "an organization that works either directly on AI alignment, or a 'meta' org that e.g. better enables others to work on AI alignment". More on Ought

It might be worth a shot to quickly apply to speak with 80,000 Hours and see if they have any suggestions.

Fathom Radiant, an ML hardware supplier, is also hiring remotely. Their plan is apparently to offer differential pricing for ML hardware based on buyers' safety practices, in order to incentivize safer practices and support safety research. I'm not totally sold, but my 80,000 Hours adviser seemed like a fan. You can speak with Fathom Radiant to learn more about their theory of change.

I'm not particularly sold on how Ought's current focus (Elicit) translates to AI alignment.  I'm particularly pessimistic about the governance angle, but I also don't see how an automated research assistant is moving the needle on AI alignment research (as opposed to research in other domains, where I can much more easily imagine it being helpful).

 

This is possibly a failure of my understanding of their goals, or just of my ability to imagine helpful ways to use an automated research assistant (which won't be as usable for research that advances ... (read more)