This is good advice, and is not specific to AI safety or "short timelines". In every field I've ever been part of, the people who show up and start doing the stuff that they see needs doing end up accomplishing more than the people who think in terms of filling open positions and sending applications.
If someone is hiring for the work you want to do, or is offering a fellowship that seems like it'll help you get where you want to go, then by all means apply. But the difference in mindset between people who see that stuff as the default, vs those who see it as instrumental towards their own plans, is very big.
Highly agree with the importance of starting to do work if you want to enter a field.
But I'd highlight that most people can also find a few (e.g. 4-8) hours each week alongside a job (as already mentioned in other comments), so I'd encourage thinking about these separately:
The alternatives to the latter, like EtG, career capital, and access to more mentoring through a job (and potentially even to the former as well), could be strong for some.
See also: if you aren't financially stable, rather than "earn to give", "earn to get sufficiently wealthy you can afford to not have a job for several years while working on AI stuff".
BTW, I think "financially stable" doesn't mean "you can technically survive a while"; it means "you have enough cushion that you won't feel any scarcity mindset." For almost everyone, I think this means at least 6 months more runway than you plan to use, and preferably more like a year.
(Note: AI automation might start doing wonky things to the job market by the time you're trying to get hired again, if you run out of money.)
I also don't really recommend people try to do this as their first job. I think there's a collection of "be a competent adult" skills that you probably don't have yet right out of college, and having any kind of job-with-a-boss for at least like 6 months is probably valuable.
Followup thought: there's a lot you can do as a side hustle. If you can get a job that you don't care that much about but pays well, and you don't have enough money to quit with 3+ years of runway (i.e. 2+ for Doing Stuff and 1 for figuring out how to have more money)...
...that doesn't mean "don't do anything", it means "find things to do that are motivating enough that you can do them on evenings/weekends, and start building some momentum/taste. (This may also later help you get an AI safety job.)"
Most people who end up doing jobs that they love / that are meaningful to them find some way to pursue that work in their spare time while they have a Real Job.
Yes! If you have reasonable takes and taste and agency, saving up to self-fund is usually the most cost effective donation you could find.
Could you please elaborate on this? I was also considering self-funding but could not find a good justification so far.
Giving money goes through several layers of reduced effectiveness and inefficiency. It's good as a fallback and self-signal, but if you can find and motivate yourself to do worthwhile things yourself you can do much more with much less money.
This seems unlikely to me. It seems like I should be way more willing to fund someone else to work on safety now than myself multiple years in the future.
The discount rate on future work and funding is pretty high compared to work now. Also, the other person was able to get funding from a grant-maker, which is a positive signal compared to myself, who wasn't.
I think it depends on how much you believe in yourself vs. other specific people. I agree with funding other specific people you believe in.
What this means in practice is that the "entry-level" positions are practically impossible for "entry-level" people to enter.
This problem in and of itself is extremely important to solve!
The pipeline is currently: a university group or some set of intro-level EA/AIS reading materials gets a young person excited about AI safety. The call to action is always to pursue a career in AI safety, and part of the reasoning is that there are very few people currently working on the problem (it is neglected!). Then they try to help out, but their applications keep getting rejected.
I believe we should:
Definitely yes to more honesty!
However, I think it's unfair to describe all the various AI safety programs as "MATS clones". E.g. AISC is both older and quite different.
But no amount of "creative ways to bridge the gap" will solve the fundamental problem, because there isn't really a gap. It's not that there are lots of senior jobs if only we could level people up faster. The simple fact is that there isn't enough money.
Is money really the bottleneck? It seems to me that the distribution of senior mentors to entry-level people is more of a bottleneck. Also, funneling the right people into the right projects is a slower process than funding. Please let me know if my intuition seems to be off here.
Identify the risk scenario you'd most like to mitigate, and the 1-3 potentially most effective interventions.
this is actually hard, and it's where I stumble. for me the whole thing seems too overwhelming to have a preference.
do you have any specific examples? what are the scenario(s) that drive your efforts?
I spent a long time spinning my wheels calculating the "scenarios i'd most like to mitigate" and the "1-3 potentially most effective interventions" before noticing that not specializing was slowly killing me. Agree that this is the hard part, but a current guess of mine is that at least some people should do the 80/20 by identifying a top 5 then throwing dice.
I have to make a post about specialization, the value it adds to your life independent of the choice of what to specialize in, etc.
Looking forward to reading this in the future! I'd like to add though that some people really enjoy being generalists, wearers of many hats, and Jacks of all trades, ready to switch tasks/projects/cause areas if priorities seem to change.
For sure, it's hard, and also perhaps the most crucial part if you wanna feel like you're doing the most effective thing you could be doing.
When I started this process after graduating ~9 months ago, I spent the first ~2 months mainly reading, trying to get a broad general understanding of what's going on and why. I had already established that I cared the most about preventing extinction scenarios.
It might help simplify to go the Ngo route of batching risks together and identifying the intervention that would mitigate the whole bunch at once.
Personally, it took me ~2 months to come to the conclusion that an international agreement/treaty on red lines might be the single highest-EV, reasonable, no-brainer intervention out there. I've explored other options but always returned to this one. Would like to see folks shoot for the moon and honestly try to make something like this happen in 2026-2027.
Thank you for adding that last part, "Have a sufficient financial safety net".
Not everyone can afford to leave paid work and just do AI Safety work for free / in favour of unstable short fellowships.
It's harsh but it's the reality. If you have dependants (children, or other family members) or have massive student loans to pay every month, and have zero financial support from anyone else...
I'd say: still dedicate as much time as possible to your research. Like OP says: spend more time actually doing the work than on applications.
But I've seen people get pressured into quitting their work or studies, and made to feel like they're not committed enough if they don't, when it turns out they'd be putting their entire family in financial duress if they did so.
And to those who can afford not to work for a salary: please remember to be kind and prudent. Some of us also have short timelines, but we'd like our families to have food on their plate too.
P.S.: in case it isn't obvious, I do support the message on this post.
I would recommend that anyone with dependents, or any other need for economic stability (e.g. lack of a safety net from your family or country), should focus on earning money.
You can save up and fund yourself. Or, if that takes too long, you can give what you can, e.g. 10% (or whatever works for you), to support someone else.
to be clear: this strategy is also problematic if you hope to have dependents in the future (i.e. are dating with family-starting intentions). it's a selection effect on the partners you can realistically have relationships with.
source: pain and suffering.
Agreed! Thank you, Linda. For readers: this also goes for students who do not have support from family members or whose families are in a quite precarious situation; quitting your degree may leave you with thousands in debt that you'll still have to repay.
For undergrad students in particular, the current university system coddles. The upshot is that if someone is paying for your school and would not otherwise be paying tens of thousands of dollars a year to fund an AI safety researcher, successfully graduating is sufficiently easy that it's something you should probably do while you tackle the real problems, in the same vein as continuing to brush your teeth and file taxes. Plus you get access to university compute and maybe even advice from professors.
the current university system coddles
No doubt true in many cases, but I would assume this to depend on exactly which country, university, degree etc. we were talking about?
Yeah, if you are doing e.g. a lab-heavy premed chemistry degree, my advice may not apply to an aspiring alignment researcher. This is absolutely me moving the goalposts, but it may also be true: on the other hand, if you are picking courses with purpose (philosophy, physics, math, probability, comp sci), there's decent odds imho that they are good uses of time in proportion to the extent that they actually demand your time.
Yep, for sure put your own oxygen mask on first, this field doesn't need more people burning the last of their slack and having a bad time/losing the ability to aim properly.
This post is for people who want to help and have slack, but feel they need permission.
This is a good addition! However, I'm assuming that if the advice given in this post applies to you, you're already using a nontrivial amount of time on applications, which you're not getting paid for anyway. So might as well trade that time for volunteering or working on your own project. [1]
Also I should make explicit a fundamental underlying assumption I have, which is that if you do a great job as a volunteer, you're more likely to get hired or funded [2], than if you were starting the application process from scratch.
Orgs spend significant resources on hiring, and I'm sure many would be happy to skip the recruitment process and just hire the person they've already seen in action, or who is already doing the job for them. I personally believe most AIS orgs act in good faith and want to pay the people who do valuable work for them, if they have the financial means to do so.
Probably should be added here that if you have good reason to believe you're likely to be accepted to a position, I wouldn't discourage you from applying - this is definitely not the purpose of this post.
...for either a) what you're already doing, b) a different position in the same org/project, or c) a position in another org/project because now you have relevant experience in your CV.
I'm not sure. I see how it could be helpful to some applicants, but in the context of that particular interaction, it feels interpretable as “we're not going to fund you; you should totally do it for free instead”. Something about that feels off, in the direction of… “insult”? “exploitative”? maybe just “situationally insensitive”?—but I haven't pinned down the source of the feeling.
Hmm reasonable counterargument, I may become convinced after pondering. In this context, my mental representative[1] of "put it in the email" says:
Sure, I'll bite that bullet. Put something to this effect:
We understand that not getting funded or mentored is frustrating. Unfortunately, we have no money, no researchers, and more applicants than there are grains of sand on earth. Don't wait around for someone to fund you, the world needs bootstrappers. If you're sufficiently motivated and can self-fund such that you have time to {study, research, policy, activism, genius weird ideas, wording unfinished - should be single word covering all these cases}, consider volunteering your thinking and agency to whatever gap you can identify. We hope people doing good work somehow get retroactive funding, though this is not on the horizon.
Basically I want these funding and mentoring opportunities to be apologetic about their very existence being near-unavoidably misleading. I want this as part of an ensemble of small tweaks to hopefully prepare for not depending on official connections to find people who can contribute. There are a lot of brilliant insights not being attempted because people aren't getting mental hero licenses (though the literal phrase "hero license" is a bad prompt for many). They see opportunity, apply, and get discouraged about the whole thing, because they're humans. I want the rejection to pump them up to sit down and think about the problem end to end as part of an open community. Probably also needs links to some tsvi/wentworth/wei dai getting-started posts.
edit: and I agree with Katalina downthread that this needs to also be written so that it is clear it only asks for what makes sense to the reader. I mean it to be a call to agency.
Mental representative: whatever ensemble in my head represents a thing as a coherent simulator of that thing. A subcircuit which has been temporarily allocated to represent something I encounter. Compare "representation", which can include representations too low-resolution to qualify as being a representative.
Basically I want these funding and mentoring opportunities to be apologetic about their very existence being near-unavoidably misleading.
Super agree! I wanna say something like "please respect the time of people who are doing everything in their power to have a maximally positive impact in the world". Bottom line: It's unethical to waste a fellow altruist's time.
I'd really like to see orgs change how they describe positions to candidates. E.g. being way more transparent about how many applicants they've had in the past, what are the deciding factors in their selection, and whether it's actually even remotely possible that they would consider someone without a relevant visa/residency, or someone who doesn't meet the exact preferred qualifications listed.[1]
I'm saying this while keeping in mind that it's hard to predict the quality and quantity of applicants, but my broad intuition still is that common sense alone should nudge towards this direction.
An example of a win-win would be for orgs to use extremely brief screening forms at the first application stage, ones that would actually take a person <15 minutes to fill out, and didn't leave any room for overcompensation.[2]
"Even if you don't meet all the requirements, we still strongly encourage you to apply" does not give a clear enough idea of the probability or circumstances under which this would be possible. I find it a bit unethical at this stage to be signaling fake inclusivity when the truth is more like "yeah, on paper, you don't need to have experience, but in practice, there's a <1% chance we'll select you if you don't".
Like don't ask for a CV - that way you prevent people from wasting their time tailoring one for the position.
This isn't for people who are in the field for a job, this is for the people who want everyone to not die and want to step up. It's not a requirement for being a moral person, you don't have to dive in to save a drowning child. But in a field where it's not someone earning money off your back but you producing a common good you want... yeah, I think it's good to do things for free if you're able.
And before anyone calls me a hypocrite: I have worked full time for the past five years trying to reduce doom. I have never received a salary, and burned down the majority of my runway, even being efficient. This seems acceptable, I don't need my tokens after the singularity, and not taking jobs leaves me free to be properly agentic.
Have a sufficient financial safety net
I think this condition is important only if I am going to leave my full-time job and switch to unpaid AI Safety projects. For some people (who have financial security), this may be the case. Many, including myself, do not have this security. That does not mean I can't do any projects until I get enough funds to survive; rather, it means I can only do part-time projects (for me, it was organising mentoring programs and leading an AI Safety Camp project). Meanwhile, I still think applying to the roles that seem a good fit for me makes quite a lot of sense - I would rather spend 40 hours/week working on AI Safety than on a regular job. Maybe it should be something like 80% projects, 20% applying (the numbers are arbitrary).
I feel that the percentage of people who can afford not to have paid work and only do AI Safety projects till AGI arrives is not that high. It would be nice to also have a strategy and recommendations for what a person can do for AI Safety with 10 hours/week, or 5, or even 1. I think the bar for doing something useful is quite low - even with 5 minutes/week, one can e.g. repost stuff on social networks.
I'd consider a job which leaves you slack to do other things as a reasonable example of a financial safety net. Or even the ability to reliably get one if you needed it. Probably worth specifying in a footnote, along with other types of safety net?
The soundness of this advice depends a bit on what career path you want to pursue, though. If you want to do some lobbying or policy advocacy, it's pretty difficult to "just get to work" if you don't have the right network, skill set, and credentials. And working in that area without knowing what you're doing can also be quite harmful.
I understand where you're coming from, and agree that you should be very mindful of how your ignorance might create blind spots in your EV calculations, but really the kind of "work" that I'm referring to could also be extremely low-cost and low-risk. Example: You might spot mistakes or opportunities for improvement on a website of an org that is doing important work, and offer to help fix those.
Yeah but that doesn't really change the general argument made by sanyer here. I do have some networks, trivial skills and credentials, but I could also be an incompetent noob who's like a chimp with a hammer thinking they're fixing things.
I guess the crucial difference here is that I do my best to a) learn from other people, b) go for short feedback loops rather than shooting off my own high-cost, high-risk tangents, and c) elicit, actually listen to, and carefully assess feedback from various directions. All this while examining in detail the underlying assumptions behind people's reasoning which have nothing to do with experiences/credentials.
Identifying as an impostor has really been beneficial here I guess. I'd rather be safe than sorry, and assume I don't know the stuff others do, until I have good reason to believe they're just as confused as I am.
I'm also counting on people with reasonable world models to tell me if they think I'm doing something net negative, but until they do, I'll assume I can just do things.
I think this is true in most fields, and even in technical fields where you can learn a lot from reading papers, you learn a lot of very different (and more practical) skills by working with experienced people before you strike out on your own.
I don't see what technical work there is to do at this point. It seems like it's all political.
From my perspective, the biggest point of the political work is to buy time to do the technical work.
(And yeah there's a lot of disagreement about what sort of technical work needs doing and what counts as progress, but that doesn't mean there isn't work to do, it just means it's confusing and difficult)
I too struggle to understand what, if any, technical progress was made on the alignment problem.
LW discourse on alignment reminds me of the Buddha master on the mountaintop describing God: "not this, not that, not that either, and certainly not that." I saw many more posts about what the answer is not than what the answer is.
Knowing that you haven't solved the problem is actually really quite useful and important! I think basically no progress has been made on the alignment problem, but I do think the arguments for why it's not been solved yet are as such really quite important for helping humanity navigate the coming decades.
It's worth saying that applying for things can also yield some benefits. I definitely became a better writer through the various work tests I did when I was applying to lots of training programs. I also got some nice feedback (props to CLR especially!) and the experience helped me to better understand what different orgs & people are working on. I also got a clearer idea of my career aspirations.
This is assuming you get through the very first round and get to do some test tasks though...
What this means in practice is that the "entry-level" positions are practically impossible for "entry-level" people to enter.
I recently applied to an ML/AI upskilling program. My rejection letter pointed out my rather sparse GitHub (fair!) and suggested that I fill it with AI safety-relevant projects... politely glossing over the fact that I can't complete those projects until I learn ML/AI engineering. The cynic in me raises an eyebrow and asks what, other than credentials, the program actually has to offer if it only accepts people who already understand the material. (The gentler side of me says that maybe it's okay if they only offer credentials.)
For what it's worth, I'm glad to see that people more qualified than me are applying to these programs and jobs.
I agree! FIG (as the example most familiar to me) was initially pretty accessible to entry-level folks (in 2023) and now has an acceptance rate of <2%. Entry-level programmes are flooded with applicants, making them both extremely competitive and slow to move. Solely applying to fellowships and jobs is an inadequate strategy for most newcomers to AI safety.
In some sense, fellowships' primary value-add is to systematise and reduce the agency required to find a mentor. But, as you suggest, emailing mentors/organisations directly can be an excellent way to bypass long, competitive application processes.
Based on what I did when I was getting into the space, this is what I'd suggest:
This pathway is much less saturated and competitive, and you only need one good match to change your trajectory.
Though, as OP says, note that this route is not a perfect substitute!
I am a few months into trying this. It tentatively seems to be going well, but I'll be more confident once I have succeeded/failed at publishing the paper I'm currently working on.
I tried that route as well, delving semi-deeply into an alignment-objectives-adjacent subject for ~2 months, but I wasn't happy with the EV and the length of the feedback loops. My timelines are too short.
You may want to consider the optics of telling people to focus less on applying to things, then at the end of the article saying "the EA Hotel... happens to be an excellent place to apply to."
Sure, the EA Hotel might be legitimately different from other organizations, but I have no reason to believe that.
Crossposted to EA Forum.
TL;DR: Figure out what needs doing and do it, don't wait on approval from fellowships or jobs.
If you...
... I would recommend changing your personal strategy entirely.
I started my full-time AI safety career transitioning process in March 2025. For the first 7 months or so, I heavily prioritized applying for jobs and fellowships. But like for many others trying to "break into the field" and get their "foot in the door", this became quite discouraging.
I'm not gonna get into the numbers here, but if you've been applying and getting rejected multiple times during the past year or so, you've probably noticed the number of applicants increasing at a preposterous rate. What this means in practice is that the "entry-level" positions are practically impossible for "entry-level" people to enter.
If you're like me and have short timelines, applying, getting better at applying, and applying again becomes meaningless very fast. You're optimizing for signaling competence rather than actually being competent. Because if you a) have short timelines, and b) are honest with yourself, you'll come to the conclusion that immediate, direct action is the priority.
If you identify as an impostor...
...applying for things can be especially nerve-wracking. To me, this seems to be because I'm incentivized to optimize for how I'm going to be perceived. I've found the best antidote for my own impostor-y feelings to be this: Focus on being useful and having direct impact, instead of signaling the ability to (maybe one day) have direct impact.
I find it quite comforting that I don't need to be in the spotlight, but instead get to have an influence from the sidelines. I don't need to think about "how does this look" - just "could this work" or "is this helpful".
And so I started looking for ways in which I could help existing projects immediately. Suddenly, "optimize LinkedIn profile" didn't feel like such a high EV task anymore.
Here's what I did, and what I recommend folks try:
Identify the risk scenario you'd most like to mitigate, and the 1-3 potentially most effective interventions.
Find out who's already working on those interventions.[1]
Contact these people and look for things they might need help with. Let them know what you could do right now to increase their chances of success.[2]
What I've personally found the most effective is reaching out to people with specific offers and/or questions you need answered in order to make those offers[3]. Address problems you've noticed that should be addressed. If you have a track record of being a reliable and sensible person (and preferably can provide some evidence to support this), and you offer your time for free, and the people you're offering to help actually want to get things done, they're unlikely to refuse[4].
(Will happily share more about my story and what I'm doing currently; don't hesitate to ask detailed questions/tips/advice.)[5]
This work was supported by the EA Hotel, which offers free or low-cost food and accommodation to people working to improve the world. It happens to be an excellent place to visit if you'd like to extend your runway and be surrounded by great people who are mostly working towards AI safety. You can apply to stay for free, or pay to be there very affordably.
Relatedly, the EA Hotel is in urgent need of funding despite being extremely cost effective and having incubated multiple organizations in AI safety. Consider donating, they are blocked on basic maintenance for lack of funds and will run out of money in a few months by default.
If nobody seems to be on the ball, consider starting your own project.
Here it's quite helpful to focus on what you do best, where you might have an unfair advantage, etc.
As a general rule, assume the person you're messaging or talking to doesn't have the time to listen to your takes - get straight to the point and make sure you've done the cognitive labor for them.
I should add that in order to do this, you need to have developed a bit of agency, as well as an understanding of the field you're trying to contribute to. I'm also assuming that since you have the capacity to apply for things, you also have the capacity to get things done if you trade that time.
Post encouraged and mildly improved by plex based on a conversation with Pauliina. From the other side of this, I'd much rather take someone onto a project who has spent a few months trying to build useful things than someone who spent those cycles signaling on applications, even if their projects didn't go anywhere. You get good at what you practice. Hire people who do things, and go do things. E.g. I once gave the org Alignment Ecosystem Development, which runs all the aisafety.com resources, to a volunteer (Bryce Robertson) who'd been helping out competently for a while. Excellent move! He had proved he actually did good stuff unprompted, and he's been improving it much more than I would have.
Also! I'd much rather work with someone who's been practicing figuring out inside views of what's actually good to orient their priorities rather than someone looking for a role doing work which someone else thinks is good and got funding to hire for. Deference is the mind-killer.