Background

Reading Death With Dignity, combined with recent advancements shortening my timeline, has finally made me understand on a gut level that nature is allowed to kill me. Because of this, alignment has gone from "An interesting career path I might pursue after I finish studying" to "If I don't do something, I'll die while there was something else I could have done."

A big part of writing this is to get tailored advice, so here's a bit about my current skills that could be useful to the cause:

  • Intermediate programmer. I haven't done much in AI, but I'm confident I can learn quickly. I'm good with Linux and can do sysadmin-y things.
  • I know a good amount of math; I've read textbooks on real analysis and linear algebra. With some effort, I think I can understand technical alignment papers, though I'm far from writing them.
  • I'm 17 and unschooled, meaning I have nearly infinite free time and no financial obligations.

Babble

Inspired by Entering at the 11th Hour, here's a list of some things I can do that may contribute (not quite "babble", but close enough):

  1. Learn ML engineering and apply to AI safety orgs. I believe I can get to the level talked about in AI Safety Needs Great Engineers within a few months of focused practice.
  2. Deal with alignment research debt by summarizing results, writing expositions, making quiz games, etc.
  3. Help engineers learn math: hold study groups, tutor, etc.
  4. Help researchers learn engineering. (I'm not an amazing programmer, but a lot of research code is questionable, to say the least.)
  5. Donate money to safety orgs. (I'll wait until I'm familiar with all the options, so I can weight them by likelihood of success.)
  6. Host an AI/EA/Rationality meetup. I live in the middle of nowhere, so this would be the only one around.
  7. Try to convince some young math prodigies that alignment is important. (I've run into a few in math groups before.)
  8. Make a website/YT channel/podcast debating AGI with people in order to convince them and raise awareness. (Changing one person's career path is worth a lot.)
  9. Lobby local politicians; see if anyone I know has connections and can put in a word.
  10. Become active on LessWrong and EleutherAI in order to find friends who'll help along the way. This is hard for me because of impostor syndrome right now (you don't want to know how long writing this post took).

Reflection

I most like (1), (2-4) and (6). (7) is something I'll do next time I have the chance.

I'm going to spend my working time studying the engineering needed to get hired at a safety org. If anyone here is good at programming and bad at math (or the converse), please contact me; I'd love to help (teaching helps me learn a subject too, so don't be shy).

Updates

Got accepted to Atlas after applying due to prompting in the comments, which led to me becoming more and more involved in the Berkeley rationalist scene (e.g., I stayed in an experimental group house for a month in October-November of 2022), and now I'm doing SERI MATS under Alex Turner until March of 2023.

I still have a long way to go before aligning the AI, but I'm making progress :)

Comments

If you're planning to study/teach math anyway, I've found that framing exercises are a really good 80/20 for getting people to the point where they can use mathematical concepts. However, it takes a fair bit of work to create a good framing exercise. So if you could create a bunch of those, I expect they'd be a fairly powerful tool for creating more competent researchers.

(Also, I have a post with a big list of useful-for-alignment math, almost all of which would benefit from lots of framing exercises.)

Thanks! I will definitely read those!

Read it; that study guide is really good. It really motivates me to branch out, since I've definitely over-focused on depth before and not done enough applications/"generalizing".

This also reminds me of Miyamoto Musashi's 3rd principle: "Become acquainted with every art."

You may want to apply for the Atlas Fellowship. There's also the AGI Safety Fundamentals course (you may need to be 18).

Regarding (9), I'd suggest reaching out to CEA before engaging in any lobbying, since if done poorly it can be counterproductive.

Thanks! Some other people recommended the Atlas Fellowship, and I've applied. Regarding (9), I think I worded it badly; I meant reaching out to local politicians (I thought the terms were interchangeable).

Even so, it's recommended to check in for advice before contacting anyone important.

My thought, as a researcher who is pretty good at roughshod programming but not so good at rock-solid, tested-everything programming, is that programming/engineering is a big field. Focusing on a specific aspect that is needed and also interesting to you might be advantageous, like supercomputing / running big Spark clusters, or security / cryptography.