A lot of this is fairly relatable. Thanks for writing it.
I just received my BSc in CS with a Math minor on Nov 12th. As a Canadian living in Canada, I do feel some worry that my time could have been better spent focusing directly on AIA. However, that would have been a less socially legible decision to family who do not (or at least did not, when I started my degree) believe in the immediate danger of AI.
I do think a lot of the math was valuable; much less of the CS content seemed valuable, although I found the CS content a great deal easier, which made finishing my degree faster.
I also took electives I found fun, such as an intro to Japanese: since I have unknown, possibly very short timelines and a pretty high p(doom), I'm trying to split my time between contributing to the AIA effort and enjoying what time I have left.
I engaged with the AI club at my university, hoping to foster a focus on AIA. In retrospect, I think this was mostly a mistake; my efforts would have been better spent promoting PauseAI, as I began to do in my final year, or simply engaging with classes and AIA topics without the other social engagement. Maybe if I were more naturally social, or had more spare time to focus on campaigning, it would have been more successful, but it seemed like most people in the AI club already had their minds made up about AI and saw ML as an important career skill, not a threat to humanity.
Now that I'm finished, I'm living off some savings for a time and trying to figure out what to do next and how to engage meaningfully with AIA. I have been applying to fellowships, and I'm also trying to publish a self-study journal to keep myself focused on understanding more about AIA and on further developing my research and writing skills.
Any advice you might have for me would be appreciated.
the payoff of additional knowledge is distributed over a large number of years ... if you expect your career to last less than a decade ... each difficult course takes entire percentage points away from your remaining productive thinking time
It depends on the balance between how useful median-quality vs. outlier-quality work is at your level of talent (and on how much the ability to carry out that work depends on your position, such as the state of your career), so it can make sense to maximize the probability of occasional outlier outputs. In that case, spending half of all your time studying obscure theory of uncertain relevance might be the way to go, and the college years certainly won't offer enough time for this, as it's feasible to end up needing to understand much more than you are capable of learning in a few years.
When I started my freshman year, my median estimate for AGI was 20 years. In my senior year it was down to 3 years (although it’s gone back up to 5 years since then). My expectations of the future made my college experience somewhat unusual and I will share some reflections as someone who recently graduated.
I came into college wanting to minimize existential risks, from the simple fact that AGI is likely to happen this century and biological weapons and nuclear war could cause catastrophes even if AGI doesn’t happen.
College is usually a time when people mature, take steps towards finding their place in the world, and start their 40-year careers. For my friends, it is a time when they are faced with the creation of superintelligence and the potential loss of everything they care about.
There is a duality many of us have felt, where you spend one hour building skills to eventually help prevent AGI from destroying humanity, and the next hour doing homework for a useless class you’re forced to take for your degree. One conversation is about whether you should carry around iodine in case tensions spiral out of control during AGI takeoff, and another is about which dining halls are infested with mice. In a sense, we’re inhabiting two worlds at once – a world where we’re normal college students, and a world where AGI might be built by the end of the decade.
I found it sanity-preserving to keep in mind that if I was forced to take some classes, then they weren’t useless. There was the option of failing the classes, but that wouldn’t have been helpful for achieving my goals (because as an international student, not getting a degree would have made my life much harder). So, it was actually good for the world for me to pass my classes.
Usually, people go into college expecting a 40-year career. Under 40-year planning horizons, it makes sense for a STEM student to take a broad range of difficult courses, as (1) the payoff of additional knowledge is distributed over a large number of years, and (2) there is significant uncertainty about which knowledge will be useful in your career.
This changes significantly if you expect your career to last less than a decade. At that point, each difficult course takes entire percentage points away from your remaining productive thinking time. If there is no clear way for that course to help your short career, then it’s almost equivalent to throwing entire percentage points of your future impact out the window (and not even in a fun way).
I was triaging my time intensely when deciding whether to take very time-consuming, problem-set-heavy classes like a CS class in operating systems or a theory-focused statistics class. I ultimately converged on taking no time-consuming courses (except obviously useful ones like linear algebra and introductory statistics), and focusing the saved time on grad-student-supervised AI safety research, field building, and part-time work in AI safety.
I think I overcorrected a bit. I now regret not having taken somewhat more advanced statistics courses, as my current work can get pretty statsy at times. But aside from that, I think my course choices were close to optimal.
When picking a major, my main consideration was “how much time does this free up for me to do AI safety research”. The CS and statistics degrees were disqualified because of their multiple time-consuming requirements that don’t seem to contribute to skills relevant to AI safety.
I eventually converged on designing my own major through Harvard’s Special Concentrations Department, and I estimate this resulted in hundreds of additional hours spent on AI safety. I’m extremely grateful to the department for enabling me to do this. If I had been at a different university, I probably would have done something like Applied Math, which offers a good mix of resume credibility and a lack of super time-consuming requirements, or maybe I would have done CS.
I think it’s likely I would have dropped out and started doing full-time AIS upskilling or work by junior year if I wasn’t an international student. As it stands, I was basically forced to get a bachelor’s degree if I wanted to stay in the US and make AGI go well.
The number of students dropping out or taking leaves of absence because of AGI has been slowly increasing. In Boston, the number is around 10 over the last 3 years, and I expect many more over the next few years.
When my friends would ask me whether they should drop out, I would always ask “Drop out to do what?” If the answer was “start a full-time role in AI safety”, I would say “Hell yeah, go for it!” But if the answer was “I don’t really know, upskill on my own and look for a job?” then I’d hesitate. I think school is a great waiting room, as you can upskill and keep applying to roles over and over without having a hole in your resume. Once you land a full-time role or a spot in a research program like MATS or Astra, then it seems great to do those things instead.
In my freshman year, there were very few people on campus who took AGI seriously. This meant that the few people who were thinking about it had very few people to talk to. I sadly suspect this is still true for most universities.
Luckily this changed during my time in college, and Boston now has a very large ecosystem of students who are considering working on AI safety (more than 100). It’s still far from where it would be in a sane civilization – AI safety warrants a mass mobilization of the world’s greatest minds, like in WWII. The students at Harvard and MIT who see the stakes clearly and contribute to AI safety despite the inaction of the broader academic community have brought joy to my life.
My friend group formed around intensely pursuing a set of shared goals, centered on making AGI go well. We kept each other on track and helped each other get into the positions we were aiming for. I now look back on that tight-knit group of friends and feel extremely grateful for having those people in my life.
I found having a strong friend group focused on AI safety careers extremely valuable, and I think it’s the reason the Boston AI safety student ecosystem is arguably the most successful in the world.
For students navigating the pre-singularity alone, my strong recommendation is to find other students who also want to make AGI go well, and stick together. It is uniquely sanity-preserving to hang out with people who realize that AGI is likely in the coming decades.
And don’t just stick together — keep each other accountable to actually steer the future in a good direction. Oftentimes, just having someone to talk to makes it much more likely that a person will apply to MATS, take the right course, or apply to the right job. One of the most impactful 10 seconds of my life was writing a quick DM telling my friend to apply to MATS.
Given the state of the world, my college experience was basically the best college experience possible thanks to all my wonderful friends who were also navigating these crazy times. Creating an environment where students can help each other make AGI go well seems like one of the most impactful things current students could be doing.