I have been doing independent research in addition to my Ph.D. for roughly a year now. For the next 6 months, I’ll take a break from my Ph.D. and do AI safety research full-time. In the past, I have chatted with many people about independent research, e.g. at EAGs or because 80K connected me with people thinking about pursuing it. I had some great experiences with independent research, but not everyone does. I think the variance in independent research is large, and I'm worried that people get disheartened by bad experiences. So here are some considerations on when independent research might be a good idea, and some tips that will hopefully improve your experience.

I’d like to thank Magdalena Wache and Tilman Räuker for their feedback.

TL;DR: At first glance, there is a bit of a paradoxical nature to independent research. If someone wants to pursue independent research, they need a research agenda to work on. If they are able to construct a good research agenda, an existing institution often has an incentive to hire them. On the flip side, if their research skills are not developed enough to be hired by an existing institution, their independent research might not be very successful. Thus, naively, it would seem that there are few cases in which independent research makes sense. However, I think there are many situations in which independent research or independent upskilling is a great option, e.g. when no established organization is working on the topic you find most promising, as a way to upskill for a job, to gain new research skills, or to transition between jobs. Some tips for independent researchers: get feedback early on, aim to collaborate with others, and create accountability mechanisms for yourself, such as publishing your results. My most important advice for independent researchers is that you should probably be much more active than in other roles because there is less default structure and more responsibility on you.

I’ll mostly talk about AI safety research but many of these things probably also apply to other independent research. 

A perceived paradox

Independent research is often presented as one of three default options for people seeking to do EA research, e.g. in AI safety:

  1. Academia, e.g. applying for Ph.D. and post-doc positions with labs that do research aligned with your goals. 
  2. Research positions in industry, e.g. applying to Anthropic, Redwood Research, DeepMind, OpenAI, or Conjecture. 
  3. Independent research, e.g. supported by an EA grant. 

Doing independent research well requires a multitude of skills. The independent researcher needs to be able to set their own agenda, and they require basic research skills, self-discipline, and some way of evaluating and correcting their own research. These skills usually don’t come naturally; they need to be learned and refined. In most standard career paths, e.g. within a Ph.D. or on an industry research team, people have mentors who help them and ensure that they actually learn these skills. By default, independent research does not ensure that these skills are acquired. 

The perceived paradox is that if someone has the skills required to do high-quality independent research, existing institutions often want to hire them. If they don’t have these skills yet, the research they produce independently is unlikely to be of high quality or conducted efficiently (unless they have mentorship or are especially talented). Thus, naively, it seems like there aren’t many situations in which independent research makes sense.

However, I think there are many cases in which independent research makes a lot of sense, and there are a lot of tips that can improve it. As with all things, independent research obviously has its trade-offs. 

When does independent research make sense?

I think the most important question someone considering independent research should ask themselves is whether they want to do research or upskilling (and how much of each). The goal of independent research is to produce a research result, i.e. to increase or refine the pool of existing knowledge. The goal of upskilling, on the other hand, is to increase your own skills and knowledge; it doesn’t aim at finding anything new. You could also draw the distinction between “independent research with the main goal of producing impactful results” and “independent research with the main goal of becoming good at research (or other skills such as research engineering)”. Research and upskilling obviously correlate a bit, but I think it’s really helpful to 

  1. Think about what you actually want to do before you start. Often, when I give advice to people considering independent research, we realize during the discussion that they actually want to focus more on understanding and replicating the existing landscape than on producing new research. Even if your goal is to do new research, it might be helpful to get very familiar with the current research before you embark on your own journey. 
  2. Re-evaluate your goals from time to time. Your personal goals can change over time, or you might realize that you are not a good fit for the path you are currently pursuing. In that case, the best choice might be to pivot to a different topic, approach, or goal. There is no need to stick to a project if you are confident that something else is better or that you are not a good fit. It’s often hard to evaluate whether we have “already tried hard enough”, so we stick to the current path, which frequently means we cut our losses too late. I find the following simple heuristic (which is common in grant-making) helpful for such decisions: “How likely is it that exploring this avenue further would change your final decision?”. Often I know quite early that I don’t think a project would be worth it even if it succeeded, and I just drop it. No need to feed the sunk cost fallacy. In a classic environment, your supervisor might urge you to cut your losses; in independent research, you have to make that decision yourself, so regularly evaluating your goals and progress helps. 



To give a better sense of when independent research is a good fit, I think the following situations are plausible candidates:

  1. If they work on a side or transition project: People who already have some experience with research, e.g. Master's or Ph.D. students, can do independent research related to AI safety if their primary research doesn’t directly involve it. They usually already have some of the required research skills and are thus able to make meaningful progress even though they are not yet experts in the field. I think this is the most obvious candidate for short-term (e.g. less than a year) independent research. 
  2. If their agenda is not part of an existing research group: Some researchers have AI safety agendas that are currently not a priority at any existing AI safety institution. If they can convince a funder that their agenda is worth pursuing, they can become an independent researcher. I think John Wentworth is a very good example of someone I’d put in this category, and Neel Nanda is on a similar path. I think people who follow this path should aim to get into a mentorship position sooner or later: if they were able to come up with a sufficiently novel agenda and get funding for it, it’s probably worth pursuing by more than one person. For example, John Wentworth supervised many scholars during the last iteration of SERI MATS with great outcomes (e.g. improved research ability and concrete outputs). In the long run, this could lead to a new organization but doesn’t have to. 
  3. If they are not interested in a full-time job: Sometimes people want to do independent research but are not interested in a full-time job. This might be because they want to spend a lot of time with their family and friends or because they have other commitments that don’t allow them to manage a full-time position. In this case, independent research seems like a good alternative that allows them to do some research without constraining them too much in their other activities.
  4. If there are currently not enough positions in existing institutions: There are many talented researchers interested in contributing through direct work in AI safety, but unfortunately there are not always enough positions at existing institutions to accommodate everyone. I hope this is only a temporary problem and that there will be more positions for alignment researchers in the future. To prepare for a future position, full- or part-time independent research can be a good option. 
  5. If they want to stay independent: There are, of course, other reasons to stay independent beyond the ones I suggested above, e.g. if someone wants to live in a specific location, has a strong preference for flexibility or complete research freedom, and so on. 

While the above points are primarily about independent research, I want to emphasize again that upskilling is sometimes the better path depending on your career goals. In that case, I would probably work through Jacob Hilton’s or Gabriel Mukobi’s curriculum on my own or with collaborators. 

Tips for independent research 

I think independent research is a very high-variance path: because it has so little structure and oversight, some people excel at it while others get nearly nothing out of it. Furthermore, the “default” path for independent research is probably relatively far from fulfilling its full potential, because good execution requires much more active effort than alternative paths do. 

I guess this default path roughly looks like this: “Someone is excited about alignment and wants to contribute. They come up with a research idea and write a proposal to a funder. The proposal sounds plausible to someone with general knowledge of alignment, and the grant gets approved. The researcher starts their project and makes some initial progress. However, they also hit some unexpected roadblocks. After a while, the roadblocks turn out to be more substantial than initially expected, and they reduce the scope of their project to get around them. The reduced scope is still too ambitious, so they carve out a small subquestion of their original proposal and do some research on that. They make progress, but the grant is nearing its end, so they write up their unfinished project and publish it. They get some positive feedback for the effort and the preliminary findings but feel personally dissatisfied with how the project went. They then move on to other things.” 

I don’t want to criticize people who have roughly followed this path during their independent research, but I think we can agree that a lot of things could have gone better here. One reason I expect some version of this to be the default path, rather than something closer to the optimal version, is that people are unaware of the paths they could take, or at least don’t think those paths are open to them. Most of the things that improve your independent research require active effort from you: YOU have to reach out to another researcher, YOU have to make a plan for yourself, YOU have to create your own accountability mechanism, and YOU have to expose your own ideas to get feedback. All of this can feel scary, especially when you aren’t (or at least don’t feel like) an established member of the community, but I think it is crucial for the success of your project. 

Some tips to increase the probability of success for your independent research efforts include:

  1. Write down your project and get feedback early on: Before you start doing stuff, there should be a document (maybe 2-3 pages) answering some very basic questions like “What is the goal of this project?”, “What is the theory of change for this project?”, “What concrete steps do you want to take?”, “How does it relate to other research on this topic?”, etc. Once you think you can answer these questions sufficiently well, find people to give you feedback. These might be people you know from EAG or have seen writing about similar topics on the Alignment Forum. Just send them a friendly message, and share your Google Doc if they agree to give some feedback. Sometimes they won’t have time or can’t provide feedback themselves but might refer you to someone who can. Keep in mind that they are busy, so you can’t expect very detailed feedback at this point. The information you should be aiming for is “Does a more senior person think this project is worthwhile and actually within the scope of independent research?”. Also, there is a delicate balance here: if you send out your ideas before you have developed them a bit on your own, you might get feedback you could have given yourself; if you send them out too late, you might have wasted time that a more senior person could have saved you by pointing out that your plan probably won’t work as intended.
  2. Actively look for collaborations: Collaborations have many advantages, e.g. they are motivating, and you can share ideas and debug each other’s thinking. Collaborations can also come in many forms, ranging from quick monthly calls to working very closely together.
    There are probably people somewhere in the alignment space who work on problems related to yours. They might be other independent researchers, work for an EA organization, or be somewhere in academia. It’s not always clear who these potential collaborators are, because most people don’t publicly state what they are working on, or at least not in a place that is visible to you. Therefore, you should assume that the burden of finding these people is on you. Possible ways of finding them include searching the different forums for topics related to yours, posting your interest in collaboration on the AI Alignment Slack, using your network and asking around, or using conferences such as EAG to find interested people. 
  3. Join a group or program: There are multiple programs within the AI safety community, and I think joining them is almost surely better than doing research on your own if there is an overlap in interests. The programs I have in mind are SERI MATS, MLAB, REMIX, Refine, the AI Safety Camp, AGISF, etc. Unfortunately, these programs usually run only at specific times of the year and have a lot of applicants. But even if you think your chance of getting accepted is low, you should still apply because the payoffs are so high. Additionally, some applications are designed to provide value even if you don’t end up getting accepted, e.g. SERI MATS has a ton of interesting questions in its application. 
  4. Create accountability mechanisms: Some people have a very exploratory view of research, e.g. they think something like “we don’t know what the outcome is, things can change rapidly, there is no reason to plan it out beforehand”. I prefer and recommend a more product-driven approach: from the beginning, you intend to publish a post that answers your research question. This post can be about a negative finding, e.g. it can say “I tried the following things and nothing worked”, but in my eyes, committing to a product is very helpful for making progress. There are other ways of keeping yourself accountable, e.g. defining intermediate goals or criteria by which you can evaluate whether your research was successful. Most of this is neither new nor complicated; you really just have to do it.
  5. Be very clear on what your goals are: I think you should be clear about your goals for the independent research from the start. For most people, independent research is probably temporary, e.g. they want to do it for half a year and then use the new skills to apply to an existing AI safety institution. In that case, the goal could be something like “provide evidence that I understand the scientific frontier in my research area and am able to contribute”. If someone wants to do independent research for longer, e.g. multiple years, they will still have to secure funding, so a goal could be “provide evidence that I have a solid plan for the next years and am able to execute it”. On most other paths, you’ll get a certificate at some point, e.g. a Bachelor’s or Master’s degree. In independent research, you get nothing by default; you basically have to create the certificate yourself. 
  6. Do whatever is most effective to reach your goal: I think there are a lot of stereotypes and narratives about research that are unhelpful. For example, I sometimes hear that doing novel research is better than replicating existing work, which I don’t think is true at all. Or people first want to “understand every other piece of work on the topic” before starting their own research, or think they have to finish every paper they start reading. In reality, the most effective strategy is often a messy back-and-forth between reading/skimming existing work and working on your own project. Sometimes the best strategy is to dive right into code; sometimes you should first try to understand more of the background literature. The variance between projects is so high that I wouldn’t recommend always following the same pattern; instead, make a plan for that particular project, then execute and adapt it. 

Final thoughts

I think independent research serves an important role in the EA and AI safety landscape, but it comes with upsides and downsides. Concretely, I’m worried that independent research becomes the one-size-fits-all solution whenever the capacity of established EA institutions is too low. I think there is a risk that independent research, if executed suboptimally, could waste a lot of talent (due to opportunity costs), or that disheartened independent researchers will not want to continue contributing to EA/AI safety after a bad experience. 

My personal experience with independent research was great and I want other people to get the most out of it. I hope my thoughts on independent research are helpful. Feedback and disagreement are appreciated. 



Comments

> While the above points are primarily about independent research, I want to emphasize again that upskilling is sometimes the better path depending on your career goals.

Seconded.

When I started to take alignment seriously, I wanted to do something that felt valuable and work on something entirely original. I think I gained a lot of value out of trying to perform at that level, especially in terms of iterating on what I'd gotten wrong with actual feedback, but the truth was that I simply didn't have the context required to know intuitively what was new, or which parts were likely to be difficult under current paradigms.

It was useful, because it was only afterwards that I tried to think about the hard part of the problem and converged on ontology translation (at the time). But having expected myself to contribute something original early, I was discouraged and procrastinated on doing more for a long time. Going in wanting to skill up for a while is most likely the right mindset to have, in my opinion.

I was an independent AGI safety researcher because I didn't want to move to a different city, and (at the time; it might or might not have changed in the past couple of years) few if any orgs that might hire me were willing to hire remote workers.

Good advice. Some ways to make it even better:

  • Mention prior advice, e.g. by John Wentworth (and I'm pretty sure I've seen more; maybe we need an "Independent Research Career" tag here on LW)
  • In the context of upskilling, refer to the call for distillers. Distilling (summarising, explaining, connecting with other ideas) is not just very valuable for the community (because, historically, the field suffers from too large an "idea surface area" per researcher, some not-invented-here (NIH) tendencies, and a disconnect from academia), but is also critical for learning, because without writing things down, almost nothing of what has been read is retained and used long-term.
  • Refer to some resources/reading that should help one become a better researcher independently (that is, without much learning by absorbing practices from peers at an academic institution, and without much mentorship/guidance), such as:

The invite links to the AI Alignment Slack are no longer active; does the author (or anyone else) know how to join this Slack now?

Updated the Slack link. Thanks for spotting it.