along with "i'm not sure how i'd get paid (enough)", "i don't think i'm qualified" is one of the foremost reasons i hear from people who think AI alignment is important for why they're not doing technical AI alignment research themselves. here are some arguments for why they might be wrong.

AI alignment researchers are still confused about a lot of things. the field of AI safety has somewhere between 70 and 300 people, depending on who you ask and how you count, and most of them are doing prosaic research — especially interpretability — which i don't think is gonna end up being of much use. so the number of people working in the field is small, and the number of people contributing helpful novel stuff is even smaller.

i'm bad at math. i'm worse at machine learning. i just have a bachelor's in compsci, and my background is in software engineering for game development. i've only been working on AI alignment seriously since last year. yet i've written a variety of posts that are, at least in my opinion, helpful for alignment — see for example 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.

as is said in some of the recommended resources at the bottom of my intro to AI doom and alignment, such as the alignment research field guide or the "getting started in AI safety" talk, it is important to do backchaining: look at the problem and figure out what pieces you think would be needed to solve it, then continue backwards by thinking about what you need to get those pieces. it's also important to just think about the problem and learn things only as you actually need them — you should not feel like there's a whole pile of posts/books/etc you have to get through before you're allowed to think about solutions to the problem. otherwise, you risk wasting time learning stuff that isn't actually useful to you, and you also risk losing some of your diversity value — something that i believe is still sorely needed, given how hopeless existing approaches are.

the field is small, the bar for helping is low, and alignment researchers are confused about many things. if you think you're not qualified enough to make useful contributions to technical alignment research, there's a good chance you're wrong.

7 comments

sorry, i don't know, because i haven't done that kind of skilling up myself, apart from learning a few things here and there as needed.

Feedback: when I read this post title and the title of "You are probably not a good alignment researcher, and other blatant lies", I felt a little ashamed. I dropped out of high school before learning how to use the quadratic formula, FizzBuzz is the outer limit of my programming ability, and I have a panic reaction to math and CS, which has made improving these skills in adulthood intractable. I think I am not qualified to do technical alignment research.

Reading through both posts, I acknowledge that they're hedged enough to account for the fact that some people aren't good alignment researchers. But I think small changes to the titles and internal post phrasing would leave me feeling less desire to step in with a "well, actually". I don't know the costs of these changes very well. Sincerely, if LessWrong is a forum primarily for people who have at least a BA in CS or equivalent knowledge, then my request is overstepping. But if the intended audience is more general, then it would be an improvement to make clearer the intended scope of "people who think they aren't qualified to do alignment research, who may actually be so qualified".

that's very reasonable feedback, thanks for posting it. i wrote the post with more of a "have you considered that you're wrong?" mindset, to which "yes i have considered it and i'm not" is a perfectly reasonable response, but maybe i failed to convey that vibe.

that said, given the hedging and my assumptions about the distribution of LW readers, i'm not sure i wanna change much about the post as it is.

as is said in some of the recommended resources at the bottom of my intro to AI doom and alignment

This link is broken.

here's the correct link: https://www.lesswrong.com/posts/T4KZ62LJsxDkMf4nF/a-casual-intro-to-ai-doom-and-alignment-1

thanks! fixed.

More like "I'm not qualified yet". There's more maths, CS, and ML I need to learn first. I do encounter stumbling blocks due to insufficient technical maturity when working on some alignment posts.