Review

This list of field building ideas is inspired by Akash Wasil’s and Ryan Kidd’s similar lists. Like the projects on those lists, these rely on people with specific skills and field knowledge to be executed well.

None of these ideas is exclusively my own; they are a result of the CanAIries Winter Getaway, a two-week, unconference-style AGI safety retreat I organized in December 2022.

Events

Organize a global AGI safety conference

This should be self-explanatory: It is odd that we still don’t have an AGI safety conference that allows for networking and lends the field credibility.

There are a number of versions of this that might make sense:

  • an EAG-style conference for people already in the community to network
  • an academic-style conference engaging CS and adjacent academia
  • an industry-heavy conference (maybe sponsored by AI orgs?)
  • a virtual next-steps conference, e.g. for AGISF participants

Some people have tried this out at a local level: https://aisic2022.net.technion.ac.il 

(If you decide to work on this: www.aisafety.global is available via EA domains, contact hello@alignment.dev)

Organize AGI safety professionals retreats

As far as I can see, most current AGI safety retreats are optimized for junior researchers: Networking and learning opportunities for students and young professionals. Conferences with their focus on talks and 1-on-1s are useful for transferring knowledge, but don’t offer the extensive ideation that a retreat focused on workshops and discussion rounds could.

Organizing a focused retreat for 60-80 senior researchers to debate the latest state of alignment research might be very valuable for memetic cross-pollination between approaches, organizations, and continents. It might also make sense to hold it on work days, so that people’s employers can send them. I suspect that the optimal mix of participants would be around 80% researchers, with the rest funders, decision-makers, and the most influential field builders.

Information infrastructure

Start an umbrella AGI safety non-profit organization in a country where there is none

This would make it easier for people to join AGI safety research, and could offer a central exchange hub. Some functions of such an org could include:

  • Serving as an employer of record for independent AGI safety researchers.
  • Providing a central point for discussions, coworking, and publications. You probably want a virtual space to discuss, like a Discord or Slack, named after your country/area. List it on https://coda.io/@alignmentdev/alignmentecosystemdevelopment, then make sure to promote it and make it discoverable by people interested in the field. The Discord/Slack can then be used to host local-language online or in-person meetups.

A candidate for doing this mostly needs ops/finance skills, not a comprehensive overview of the AGI safety field.

Mind that form follows function: Try to do this with as little administrative and infrastructure overhead as possible. Find out whether other orgs already offer the relevant services (for example, AI Safety Support offers ops infrastructure to other alignment projects, and national EA orgs like EA Germany offer employer-of-record services). Build MVPs before going big and ambitious.

In general, the cheap minimum version of this would be becoming an AGI Safety Coordinator.

Become an AGI Safety Coordinator

It would be useful to have a known role, and people filling the role, of Coordinators. These people would not necessarily have decision power or direct impact; their job would be to know what everyone is doing in AGI safety, to collect, organize, and publish resources, and to help people know who to work and collaborate with. Ideally, they would also serve as a bridge between the so far under-connected areas of AGI safety and AI policy.

Some of the members of AI Safety Support have been doing similar things, but they are mostly known to newer members of the community and might not be utilized by the established organizations and people. The role of the Coordinator is to also be known by the established organizations and people.

Create a virtual map of the world where Coordinators can add themselves

This would make it way easier for people to find each other. At https://eahub.org/, an attempt to gather *all* members of the EA community in one place failed because buy-in was too costly for individuals.

Instead, we might want to have a database of key coordinators and information nodes in the community. A handful of people would be enough to maintain it, and it probably would never list more than ~200 people, grouped by location, as the go-to addresses for local knowledge.
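As a rough illustration of how lightweight such a database could be, here is a minimal sketch in Python. All field names and the example entry are hypothetical, chosen only to show the idea of grouping a small number of coordinators by location; a real version might just be a spreadsheet behind a map widget.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Coordinator:
    """One entry in the hypothetical coordinator database."""
    name: str
    location: str  # e.g. city or country; used to group entries on the map
    topics: list[str] = field(default_factory=list)  # areas they can route questions about
    contact: str = ""  # public contact handle or email

def group_by_location(coordinators: list[Coordinator]) -> dict[str, list[Coordinator]]:
    """Group the (at most a few hundred) entries by location for display on a map."""
    grouped: dict[str, list[Coordinator]] = defaultdict(list)
    for c in coordinators:
        grouped[c.location].append(c)
    return dict(grouped)

# Usage with a made-up example entry:
db = [Coordinator(name="Example Person", location="Berlin",
                  topics=["local meetups"], contact="@example")]
print(group_by_location(db)["Berlin"][0].name)  # -> Example Person
```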

Create and maintain a living document of AGI safety field building ideas

The minimum version of this would be a maintained list of ideas like the ones in Akash’s, Ryan’s, and this post.

Useful functions (a minimal data-model sketch follows this list):

  • Anyone can add new ideas
  • People can tag themselves as interested in working on/funding a certain idea
  • A way to filter by expected quality of ideas. A tremendous and underexplored model for doing this is the EigenKarma system a handful of people are currently developing. See here for a draft.
  • A function for commenting on ideas in order to improve them, or to flag ineffective/high-downside-risk ones
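To make the feature list above concrete, here is a minimal data-model sketch in Python. Everything in it (field names, the simple score threshold standing in for a proper reputation system such as EigenKarma) is a hypothetical illustration, not a spec:

```python
from dataclasses import dataclass, field

@dataclass
class FieldBuildingIdea:
    """One entry in the hypothetical living list of field building ideas."""
    title: str
    description: str
    interested_people: list[str] = field(default_factory=list)  # self-tagged workers/funders
    comments: list[str] = field(default_factory=list)           # improvements, downside-risk flags
    quality_score: float = 0.0  # placeholder; a real system might plug in something like EigenKarma

def filter_by_quality(ideas: list[FieldBuildingIdea], threshold: float) -> list[FieldBuildingIdea]:
    """The 'filter by expected quality' feature: keep only ideas above a score threshold."""
    return [idea for idea in ideas if idea.quality_score >= threshold]
```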

A sensible existing project to build this into would be Apart Research’s https://aisafetyideas.com/. While their interface is optimized for research proposals, the list under https://aisafetyideas.com/?categories=Field-Building might be a good minimum viable product for a field building version.

Other examples of living documents that might serve as inspiration for this: https://aisafety.world and https://aisafety.community

Funding

Make it easier for AGI safety endeavors to get funding from non-EA sources

Our primary funding sources suffered last year, and there are numerous foundations and investors out there happy to invest in potentially world-saving and/or profitable projects. Especially now, it might be high-leverage to collect knowledge and build infrastructure for tapping into these funds. I lack the local knowledge to give recommendations for how to tap into funding sources within academia. However, here are four potential routes for tapping into non-academic funding sources:

1. Offer a service to proofread grant applications and give feedback. That can be extremely valuable for relatively little effort. Many people don't want to send their application to a random stranger, but maybe people know you from the EA Forum? Or you can just offer to give feedback to people who already know you.

2. Identify more relevant funding sources and spread knowledge about them. https://www.futurefundinglist.com/ is a great example: It's a list of dozens of longtermist-adjacent funds, both in and outside the community. (Though apparently, it is not kept up-to-date: The FTX Future Fund is still listed as of Jan 19 2023.)

Governments, political parties, and philanthropists often have nation-specific funds happy to subsidize projects. Expanding the Future Funding List further and finding/building similar national lists might be extremely valuable. For example, there is a whole German-language book of funding sources for charity work.

3. Become a professional grant writer. A version of this that is affordable for new/small orgs and creates decent incentives and direct feedback for grant writers might be a prize-based arrangement: application writers get paid if and only if a grant gets through.

If you are interested in this and already bring an exceptional level of written communication skills, a reasonable starting point may be grant writing courses like https://philanthropyma.org/events/introduction-grant-writing-7.

4. Teach EAs the skills to communicate their ideas to grantmakers. Different grantmakers have different values and lingo. If you want to convince them to give you money, you have to convince them in their world. This is something many AGI safety field builders haven’t had to learn so far. Accordingly, a useful second step after becoming a grant writer yourself might be figuring out how to teach grant writing as effectively as possible to the relevant people. (A LessWrong/EA Forum sequence? Short trainings in pitching and grant writing?)

Write a guide for how to live more frugally, optimized to the needs of members of the AGI safety community

The more frugally people live, the more independent they are from requiring a day job. In addition, the same amount of grantmaker money could support a larger number of individuals. Accordingly, pushing the idea of frugality by writing an engaging guide for how to do efficient altruism might help us do more research per dollar earned and donated by community members.

Some resources that such a guide should contain:

Potential downside risk: This route may be particularly attractive to relatively junior people with few connections to the established orgs. Being well-connected in the community is crucial, both for developing good ideas and for developing the necessary network to get employed later. Accordingly, a good version of this guide would discourage people from compromising too strongly on being close to other community members for the sake of frugality.

Outreach and onboarding

Run the Ops for more iterations of ML4Good

French AGI safety field building org EffiSciences has run several iterations of the machine learning bootcamp ML4Good, which teaches technical knowledge as well as AGI safety fundamentals, so as to produce more AGI safety researchers or research engineers. It has a proven track record of getting people involved and motivated to do more AGI safety work (see the writeup for details), and can dispatch instructors to teach these bootcamps. Thus, the constraint on scaling is having organizers run the operations work (promoting the event, handling registrations, getting an event location…) for new iterations in various countries.

If interested, contact jonathan.claybrough[at]gmail.com

Set up a guest appearance of an AI safety researcher with exceptional outreach skills on a major Street Epistemology YouTube channel

For preventing a negative singularity, AGI safety research must move faster than capabilities research. Two attack routes for that are a) to speed up AGI safety research, and b) to slow down capabilities research. One way to do b) would be to get more capabilities researchers to be concerned about AGI safety. The general community consensus seems to be that successful outreach to capabilities researchers would be extremely valuable, and that unsuccessful outreach would be extremely dangerous. Accordingly, hardly anyone is working on this.

Street Epistemology is the atheist response to Christian street preachers. Street epistemologists use the Socratic method to assist people in questioning why they believe what they believe, often leading to updates in confidence. More info at https://streetepistemology.com/

Bringing more SE skills into the AGI safety community, or more capable Street Epistemologists into AGI safety, might help us make it sufficiently safe to do outreach to capabilities researchers. Bonus: Street Epistemologists only need just enough object-level knowledge of the topic at hand to be able to follow their conversation partner, not to argue against them. Accordingly, a solid understanding of SE and a basic background in machine learning might be enough to have useful and low-risk SE-style conversations with capabilities researchers.

A core route of memetic exchange within the SE community are a number of YouTube channels where street epistemologists film themselves nudging strangers to examine their core beliefs. If an AGI safety researcher with great teaching skills were to appear as a conversation partner on one of these channels, that might get more Street Epistemologists concerned enough that they join the AGI safety community and spread their memeplex.

Do workshops/outreach at good universities in EA-neglected and low/middle income countries

(E.g. India, China, Japan, Eastern Europe, South America, Africa, …)

Talent is spread way more evenly across the globe than our outreach and recruitment strategies. Expanding those to other countries might be a high-leverage opportunity to increase the talent inflow into AGI safety. For example, Morocco, Tunisia, and Algeria have good math unis.

One low-hanging fruit here might be to pay talented graduates for a fellowship at leading AGI Safety labs.

Improve the landscape of AGI safety curricula

Get an overview of the existing AGI safety curricula. Find out what’s missing, e.g. for particular learning styles/levels of seniority. Make it exist.

Publishing mediocre curricula is probably net negative at this point, because it draws attention away from the already existing good ones. What is needed at this point in the alignment curriculum landscape is careful vetting, identifying gaps, and filling them with new well-written, well-maintained curricula. In particular, we might need more curricula on AI governance, or on foundational concepts for field building, with curated resources on topics like MVP-building, project management, the existing infrastructure, etc.

For hints on what specifically is missing, this LW post on the 3-books-technique for learning a new skill might be a useful framework. Also, mind that different people have different learning styles: Some learn best through videos, others through text, audio, or practical exercises.

Some great examples for curricula: 

Other

Support AGI safety researchers with your skills

There are countless ways to help AGI safety researchers through non-AGI safety-related skills so that they have more time and energy for their work. Make yourself easily findable and approachable.

Bonus points for creating infrastructure to enable this. One version would be a Google form/sheet where people can add their respective skills. 

 Existing services include:

Other skills that might be valuable:

  • Software support, e.g. python support, pair programming
  • Personal assistance
  • Productivity coaching
  • Tutoring (e.g. math topics, coding, neuroscience, …)
  • Visa support
  • Tax support

Do this now:

  • (1-5 min brainstorming) What skills do you have? Could they be used to support researchers?

Find new AGI safety community building bottlenecks

Survey people (e.g. those coming out of SERI MATS and similar programs, as well as working researchers) for what they need and what their biggest bottlenecks are.

General Tips

  • Trust your abilities! You might feel like there are other people who would do a better job than you in organizing the project. But: If the project isn’t being done, it looks like whoever could do it is busy doing even more important things.
  • Get feedback! If people don’t coordinate, they might try the same thing twice or more often. In addition, especially outreach-related projects can have a negative impact. Things you might want to do if you consider working on outreach-related projects:
    • Ask on the AI Alignment Slack.
    • Write me a message here, and I’ll connect you to relevant people.
  • Cooperate! Launching projects aiming for global optima sometimes works differently than the intuitions we built in competitive settings.
    • Make use of the existing infrastructure: Building background infrastructure is costly. Instead of going freelance/founding a new org, consider reaching out to existing orgs to see whether it makes sense for them to incorporate your projects. Examples include AI Safety Support, Apart Research, and Alignment Ecosystem Development, the team behind aisafety.info and other projects.
    • Make it easy for people to propose improvements and collaborations to your project: Have an “about” page, a “suggest” button, an Admonymous account, …
  • Delegate! As much as possible, as little as necessary.
    • If you develop more ideas than you can execute on, write up lists like this one. You could also ask junior researchers/community builders whether they’d be up for picking up your dropped projects.
    • If you have the necessary funds, consider hiring a PA via https://pineappleoperations.org/ to do Ops work you don’t have the slack for.
  • Test your hypotheses! The Lean Startup approach offers a valuable framework for this. Consider reading some of the relevant literature. The 80/20 version is grokking this article by Henrik Kniberg: Making sense of MVP (Minimum Viable Product).
  • “Ideas have no value; only execution and people have!” Mind the explore-exploit-tradeoff and actually do what is the best option you currently have available. Collating this list was fun, but if all of us just make lists all day...

 

Thanks to the following people for their contributions and comments: Jonathan Claybrough, Swante Scholz, Nico Hillbrand, Magdalena Wache, Jordan Pieters, Silvio Martin.

Comments

I work on the events team at CEA, and I'm currently (lightly) exploring supporting a global AGI safety conference in 2024. It probably won't be CEA-branded, or even EA-branded, I'm just keen to make it happen because we run a lot of conferences and it seems like we'd be able to handle the ops fairly well.

If you're interested in helping or giving feedback, feel free to reach out to me at ollie@eaglobal.org :) 

I think having something like an AI Safety Global would be very high impact for several reasons. 

  1. Redirecting people who are only interested in AI Safety from EAG/EAGx to the conference they actually want to go to. This would be better for them and for EAG/EAGx. I think AIS has a place at EAG, but it's inefficient that lots of people go there basically only to talk to other people interested in AIS. That's not a great experience either for them or for the people who are there to talk about all the other EA cause areas.
  2. Creating any amount of additional common knowledge in the AI Safety sphere. AI Safety is becoming big and diverse enough that different people are using different words in different ways, and using different unspoken assumptions. It's hard to make progress on top of the established consensus when there is no established consensus. I definitely don't think (and don't want) all AIS researchers to start agreeing on everything. But just some common knowledge of what other researchers are doing would help a lot. I think that a yearly conference where each major research group gives an official presentation of what they are doing and their latest results would help a lot.
  3. Networking. 

I don't think that such a conference should double as a peer-review journal, the way many ML and CS conferences do. But I'm not very attached to this opinion. 

I think making it not CEA-branded is the right choice. I think it's healthier for AIS to be its own thing, not a subcommunity of EA, even though there will always be an overlap in community membership.

What's your probability that you'll make this happen?

I'm asking because if you don't do this, I will try to convince someone else to do it. I'm not the right person to organise this myself. I'm good at smaller, less formal events. My style would not fit with what I think this conference should be. I think the EAG team would do a good job at this, though. But if you don't do it, someone else should. I also think the team behind the Human-aligned AI Summer School would do a good job at this, for example.

I responded here instead of over email, since I think there is a value in having this conversation in public. But feel free to email me if you prefer. linda.linsefors@gmail.com 

Thanks, Linda!

I agree with your claims about why this event might be valuable. In fact, I think 3 might be the biggest source of value.

I also agree AIS should be its own thing; that was part of the motivation here. It seems big enough now to have its own infrastructure (though I hope we'll still have lots of AIS researchers attend EAG/EAGx events).

Probabilities:

  • 75% the CEA events team supports an event with at least 100 people with an AIS focus before end of 2024.
  • 55% the CEA events team supports an event with at least 500 people with an AIS focus before end of 2024.

Thanks for adding clarity! What does "support" mean, in this context? What are the key factors that prevent the probabilities from being >90%?

If the key bottleneck is someone to spearhead this as a full-time position and you'd willingly redirect existing capacity to advise/support them, I might be able to help find someone as well.

oops, sorry, I don't check LW often!

I use support to allow for a variety of outcomes - we might run it, we might fund someone to run it, we might fund someone and advise them etc.

What are the key factors that prevent the probabilities from being >90%?

Buy-in from important stakeholders (safety research groups, our funders etc.). That is not confirmed.

If the key bottleneck is someone to spearhead this as a full-time position

This isn't the key bottleneck, but thank you for this offer!

My argument here is very related to what jacquesthibs mentions.

Right now it seems like the biggest bottleneck for the AI Alignment field is senior researchers. There are tons of junior people joining the field and I think there are many opportunities for junior people to up-skill and do some programs for a few months (e.g. SERI MATS, MLAB, REMIX, AGI Safety Fundamentals, etc.). The big problem (in my view) is that there are not enough organizations to actually absorb all the rather "junior" people at the moment. My sense is that 80K and most programs encourage people to up-skill and then try to get a job at a big organization (like Deepmind, Anthropic, OpenAI, Conjecture, etc.). Realistically speaking though, these organizations can only absorb a few people in a year. In my experience, it's extremely competitive to get a job at these organizations even if you're a more experienced researcher (e.g. having done a couple of years of research, a Ph.D., or similar). This means that while there are many opportunities for junior people to get a start in the field, there are very few paths that actually allow you to have a full-time career in this field (this also applies to more experienced researchers who don't get into a big lab). So the bottleneck in my view is not having enough organizations, which is a result of not having enough senior researchers. Founding an org is super hard: you want to have experienced people, with good research taste, and some kind of research agenda. So if you don't have many senior people in a field, it will be hard to find people who can found those additional orgs.

Now, one career path that many people are currently taking is being an "independent researcher" funded through a grant. I would claim that this is currently the default path for any researcher who does not get a full-time position and wants to stay in the field. I believe that there are people out there who will do great as independent researchers and actually contribute to solving problems (e.g. Marius Hobbhahn and John Wentworth talk about being independent researchers). I am however quite skeptical about most people doing independent research without any kind of supervision. I am not saying one can't make progress, but it's super hard to do this without a lot of research experience, a structured environment, good supervision, etc. I am especially skeptical about independent researchers becoming great senior researchers if they can't work with people who are already very experienced and learn from them. Intuitively I think that no other field has junior people independently working without clear structures and supervision, so I feel like my skepticism is warranted.

In terms of career capital, being an independent researcher is also very risky. If your research fails, i.e. you don't get a lot of good output (papers, code libraries, or whatever), "having done independent research for a couple of years" will not sound great in your CV.  As a comparison, if you somehow do a very mediocre Ph.D. with no great insights, but you do manage to get the title, at least you have that in your CV (having a Ph.D. can be pretty useful in many cases).

So overall I believe that decision-makers and AI field builders should put their main attention on how we can "groom" senior researchers in the field and get more full-time positions through organizations. I don't claim to have the answers on how to solve this, but it does seem like the greatest bottleneck for field building in my opinion. It seems like the field was able to get a lot more people excited about AI safety and to change their careers (we still have by far not enough people though). However, right now I think that many people are kind of stuck as junior researchers, having done some programs, and not being able to get full-time positions. Note that I am aware that some programs such as SERI MATS do in some sense have the ambition of grooming senior researchers. However, in practice, it still feels like there is a big gap right now.

My background (in case this is useful): I've been doing ML research throughout my Bachelor's and Masters. I've worked at FAR AI on "AI alignment" for the last 1.5 years, so I was lucky to get a full-time position. I don't consider myself a "senior" researcher as defined in this comment, but I definitely have a lot of research experience in the field. From my own experience, it's pretty hard to find a new full-time position in the field, especially if you are also geographically constrained.

Intuitively I think that no other field has junior people independently working without clear structures and supervision, so I feel like my skepticism is warranted. 

Einstein had his miracle year in such a context. 

Modern academia has few junior people independently working without clear structures and supervision, but pre-Great Stagnation that happened more. 

Generally, pre-paradigmatic work is likely easier to do independently than post-paradigmatic work. That still means that most researchers won't produce anything useful, but that's generally common for academic work, and if a few researchers manage to do great paradigm-founding work it can still be worth it overall.

A few thoughts:

  1. I agree that it would be great to have more senior researchers in alignment

  2. I agree that, ideally, it would be easier for independent researchers to get funding.

  3. I don’t think it’s necessarily a bad thing that the field of AI alignment research is reasonably competitive.

  4. My impression is that there’s still a lot of funding (and a lot of interest in funding) independent alignment researchers.

  5. My impression is that it’s still considerably easier to get funding for independent alignment research than many other forms of independent non-commercial research. For example, many PhD programs have acceptance rates <10% (and many require that you apply for independent grants or that you spend many of your hours as a teaching assistant).

  6. I think the past ~2 months have been especially tough for people seeking independent funding, given that funders have been figuring out what to do in light of the FTX stuff & have been more overwhelmed than usual.

  7. I am concerned that, in the absence of independent funding, people will be more inclined to join AGI labs even if that’s not the best option for them. (To be clear, I think some AGI lab safety teams are doing reasonable work. But I expect that they will obtain increasingly more money/prestige in the upcoming years, which could harm people’s ability to impartially assess their options, especially if independent funding is difficult to acquire.)

Overall, I empathize with concerns about funding, but I wish the narrative included (a) the fact that the field is competitive is not necessarily a bad thing and (b) funding is still much more available than for most other independent research fields.

Finally, I think part of the problem is that people often don’t know what they’re supposed to do in order to (honestly and transparently) present themselves to funders, or even which funders they should be applying to, or even what they’re able to ask for. If you’re in this situation, feel free to reach out! I often have conversations with people about career & funding options in AI safety. (Disclaimer: I’m not a grantmaker.)

Thanks for your comments Akash. I think I have two main points I want to address.

  1. I agree that it's very good that the field of AI Alignment is very competitive! I did not want to imply that this is a bad thing. I was mainly trying to point out that, from my point of view, it seems like overall there are more qualified and experienced people than there are jobs at large organizations. And in order to fill that gap we would need more senior researchers, who can then follow their research agendas and hire people (and found orgs), which is however hard to achieve. One disclaimer I want to note is that I do not work at a large org, and I do not precisely know what kinds of hiring criteria they have, i.e. it is possible that in their view we still lack talented enough people. However, from the outside, it definitely does look like there are many experienced researchers.
  2. It is possible that my previous statement may have been misinterpreted. I wish to clarify that my concerns do not pertain to funding being a challenge. I did not want to make an assertion about funding in general, and if my words gave that impression, I apologize. I do not know enough about the funding landscape to know whether there is a lot or not enough funding (especially in recent months). 

    I agree with you that, for all I know, it's feasible to get funding for independent researchers (and definitely easier than doing a Ph.D. or getting a full-time position). I also agree that independent research seems to be more heavily funded than in other fields.

    My point was mainly the following: 
    1. Many people have joined the field (which is great!), or at least it looks like it from the outside. 80000 hours etc. still recommend switching to AI Alignment, so it seems likely that more people will join.
    2. I believe that there are many opportunities for people to up-skill to a certain level if they want to join the field (Seri Mats, AI safety camp, etc.). 
    3. However full-time positions (for example at big labs) are very limited. This also makes sense, since they can only hire so many people a year. 
    4. It seems like the most obvious option for people who want to stay in the field is to do independent research (and apply for grants). I think it's great that people do independent research and that one has the opportunity to get grants. 
    5. However, doing independent research is not always ideal for many reasons (as outlined in my main comment). Note I'm not saying it doesn't make sense at all, it definitely has its merits.
    6. In order to have more full-time positions we need more senior people, who can then found their own organizations, or independently hire people, etc. Independent research does not seem like a promising avenue to me for grooming senior researchers. It's essential that you can learn from people who are better than you and be in a good environment (yes, there are exceptions like Einstein, but I think most researchers I know would agree with that statement).
    7. So to me, the biggest bottleneck of all is how we can get many great researchers and groom them to be senior researchers who can lead their own orgs. I think that so far we have really optimized for getting people into the field (which is great), but we haven't really found a solution for grooming senior researchers (again, some programs try to do that and I'm aware that this takes time). Overall I believe that this is a hard problem and probably others have already thought about it. I'm just trying to make that point in case nobody has written it up yet. Especially if people are trying to do AI safety field building, it seems to me that coming up with ways to groom senior researchers is a top priority.

Ultimately I'm not even sure whether there is a clear solution to this problem. The field is still very new and it's amazing what has already happened. It's probable that it just takes time for the field to mature and people getting more experience. I think I mostly wanted to point this out, even if it is maybe obvious.

Overall I believe that this is a hard problem and probably others have already thought about it.

I'm not sure people seriously thought about this before, your perspective seems rather novel.

I think existing labs themselves are the best vehicle to groom new senior researchers. Anthropic, Redwood Research, ARC, and probably other labs were all founded by ex-staff of the labs existing at the time (except that maybe one shouldn't credit OpenAI for "grooming" Paul Christiano to senior level, but anyways).

It's unclear what field-building projects could incentivise labs to part with their senior researchers and let them spin off their own labs. Or to groom senior researchers "faster", so to speak.

If the theory that AI alignment is extremely competitive is right, then logically both the labs shouldn't cling to their senior people too much (because it will be relatively easy to replace them), and senior researchers shouldn't worry about starting their own projects too much because they know they can assemble a very competent team very quickly.

It seems that it's only the funding for these new labs and their organisational strategy which could be a point of uncertainty for senior researchers that could deter them from starting their own projects (apart from, of course, just being content with the project they are involved in at their current jobs, and their level of influence on research agendas).

So, maybe the best field-building project that could be done in this area is someone offering knowledge about and support through founding, funding, and setting a strategy for new labs (which may range from brief informal consultation to more structured support, à la an "incubator for AI safety labs") and advertising this offering among the staff of existing AI labs.

Overall, I empathize with concerns about funding, but I wish the narrative included (a) the fact that the field is competitive is not necessarily a bad thing and (b) funding is still much more available than for most other independent research fields.

I didn’t mention this in my comment, but I also agree with this. Apologies if it seemed otherwise. I was mostly expressing a bit of concern about how funding will be disbursed going forward, from a macro-perspective.

Ah, thanks for the clarifications. I agree with the clarified versions :)

Quick note on getting senior researchers:

  • It seems like one of the main bottlenecks is "having really good models of alignment."
  • It seems plausible to me that investing in junior alignment researchers today means we'll increase the number of senior alignment researchers (or at least "people who are capable of mentoring new alignment researchers, starting new orgs, leading teams, etc.").
  • My vibes-level guess is that the top junior alignment researchers are ready to lead teams within about a year or two of doing alignment research on their own. E.g. I expect some people in this post to be ready to mentor/advise/lead teams in the upcoming year. (And some of them already are.)

I'm definitely feeling like I sacrificed both income and career capital by deciding to do alignment research full time. I don't feel like I'm being 'hurt' by the world though, I feel like the world is hurting. In a saner world, there would be more resources devoted to this, and it is to the world's detriment that this is not the case. I could go back to doing mainstream machine learning if I wasn't overwhelmed by a sense of impending doom and compelled by a feeling of duty to do what I can to help. I'm going to keep trying my best, but I would be a lot more effective if I were working as part of a group. Even just things like being able to share the burden of some of the boilerplate code I need to write in order to do my experiments would speed things up a lot, or having a reviewer to help point out mistakes to me.

Proposal: If other people are doing independent research in London I'd be really interested in co-working and doing some regular feedback and updates. (Could be elsewhere but I find being in person important for me personally). If anyone would be interested reply here or message me and we'll see if we can set something up :)

General comment: This feels accurate to me. I've been working as an independent researcher for the last few months, after 9 months of pure skill building and have got close but not succeeded in getting jobs at the local research orgs in London (DeepMind, Conjecture).

It's a great way to build some skills, having to build your own stack, but it's also hard to build research skills without people with more experience giving feedback, and because iteration of ideas is slow, it's difficult to know whether to stick with something or try something else.

In particular it forces you to be super proactive if you want to get any feedback.

I'm not in London, but aisafety.community (as far as I know the most comprehensive and way too little-known resource on AI safety communities) suggests the London AI Safety Hub. There are some remote alignment communities mentioned on aisafety.community as well. You might want to consider them as fallback options, but you probably already know most if not all of them.

Let me know if that's at all helpful.

Cheers Severin yeah that's useful, I've not seen aisafety.community (almost certainly my fault, I don't do enough to find out what's going on).

That Slack link doesn't work for me though, it just asks me to sign into one of my existing workspaces.

Flagged the broken link to the team. I found this, which may or may not be the same project: https://www.safeailondon.org/

It's not the same thing; the link was broken because Slack links expire after a month. Fixed for now.

I 100% agree with you. 

I am a person entering the field right now. I also know several people in a position similar to mine, and there are just no positions for people like me, even though I think I am very proactive and have valuable experience.

Yep, the field is sort of underfunded, especially after the FTX crash. That's why I suggested grant writing as a potential career path.

In general, for newcomers to the field, I very strongly recommend booking a career coaching call with AI Safety Support. They have a policy of not turning anyone down, and quite a bit of experience in funneling newcomers at any stage of their career into the field. https://80000hours.org/ are also a worthwhile address, though they can't make the time to talk with everyone.

One thing I’d like to note is that the number of researchers is growing and we have less money than we had before. As someone who is a year into the field, my number one concern is to have funding over a long period so that I can have some stability to do my research. This is the thing that matters to me by far above all else. I’m concerned about all the new people coming into the field and hope they are in a position to get stable funding or a role at an alignment org.

Edit: note that I’m not saying it is a bad thing that the field is competitive, and I’m not trying to imply that funding should be easy for anyone to get. I’m mostly trying to bring up a point regarding our macro-strategy for alignment which I think should be considered. The ultimate goal is still to do what’s best for solving alignment.


Do workshops/outreach at good universities in EA-neglected and low/middle income countries

Could you list some specific universities that you have in mind (for example, in Morocco, Tunisia, and Algeria)?

That's one of the suggestions from the CanAIries Winter Getaway where I felt least qualified to pass judgment. I'm working on finding out about their deeper models so that I (or they) can get back to you.

I imagine that anyone who is in a good position to work on this has existing familial/other ties to the countries in question, though, and already knows where to start.

Something I believe could also be helpful is to have a non-archival peer review system that helps improve the quality of safety writings or publications, and optionally produces a readable blog post, etc.

LessWrong/Alignment forum essentially has this for users with >100 karma. If you have a draft you can click on the feedback button and ask for this kind of feedback.

That LW post review interface is a thin veneer, just making pre-publication feedback a tiny bit more convenient for some people (unless they already use Google Docs, Notion, or other systems to draft their posts which offer exactly the same interface).

However, this is not a proper, scientific peer review system that fosters better research collaboration and accumulation of knowledge. I wrote about this recently here.

This sounds like you never used the feedback button.

If you press the feedback button you used to get a text:

I'm glad you're interested in getting feedback on your post. Please type a few words here about what kind of feedback you'd like to receive.

Things we can provide:

  • Proofreading and editing from a professional copy editor.
  • Feedback on the coherence and clarity of your ideas.
  • Feedback from domain experts/relevant peers about your post.
  • Other stuff!

Lightcone pays a person to provide this service.

Hey Severin, I'm happy to say that the idea of a maintained database of AIS field-building ideas has now been implemented at AISafety.com/projects. I'm considering adding your ideas here to the database, but since this post is quite old I'm wondering whether there are any notable changes you would make to the list?