In my occasional advising calls with aspiring AI Safety folks, one of the most common questions I get is “What courses should I take next?” I often find myself replying: “None; go do stuff instead.”
Fabricando fit faber. By making, one becomes a maker.
There are a lot of courses in AI safety and governance. I’ve helped teach a few. Some are quite good! But after the tenth or twentieth person tells me “I’ve read If Anyone Builds It, Everyone Dies and taken BlueDot’s AGI Strategy and the CAIS AI Safety course, I’m not sure whether I should do ARENA or apply to MATS…” I start to notice a pattern. I want to take them by the shoulders and say “THOSE ARE NOT YOUR ONLY CHOICES.”
Look. MATS is pretty cool. Many of their graduates do good work. It’s also, AFAIK, swamped with promising applicants and eager young ML engineers.
If you think MATS sounds like a good fit for you, sure, you should probably apply. And then, while your keyboard is still cooling from the red-hot fire of your earnest expression of interest, you should turn that incandescent determination towards something you can do right now.
First of all, have you contacted your representative to tell them your thoughts on AI? No? Go do that!
Second, have you looked at other action pages for inspiration?
Third, have you considered Just Doing The Thing?
You know, the Thing. The Thing you’ve been thinking about doing, the Thing that leapt out of your mind as an important step while you were worrying about AI, the Thing that perhaps you’ve been putting off until you “have learned more” or “feel more ready”.
Verily I say unto thee: Thou’rt probably ready.
Here’s a good example of a Thing: How can OSINT be used for the enforcement of the EU AI Act?

“This policy memo explores the potential of Open-Source Intelligence (OSINT) for the enforcement of the EU AI Act. Comparing the monitoring of AI development to existing OSINT efforts in nuclear nonproliferation, this memo advocates for civil society organisations supporting compliance with the EU AI Act through OSINT. Despite its limitations, OSINT is a useful method to support the regulation of especially high-risk systems, and it should become a part of the AI policy enforcement toolbox…”
This short post was written as a final project for an AI safety course. It took a decent chunk of time—about a month of part-time work and some unknown-to-me number of hours on the part of the author. It makes a straightforward point about a narrow subject: Open-source intelligence can help enforce AI policy. It’s niche, it’s not perfect, but it exists. The author did some research, made a case, cited a few sources, and moved on.
If you’re reading this, you probably can too. There are plenty of open problems in policy or technical governance or, if you’re focused on machine learning, interpretability.
Pick a topic that seems you-shaped. Set a deadline measured in weeks, not months. Make something exist.
You do not need twenty weeks of online courses or a Ph.D. in machine learning to become an Officially Licensed Person Whose Opinions Matter. You can just make things.
They will not necessarily be good things, right away. But the act of making a Thing will make you more the sort of person who makes that sort of Thing. And you can point to the Thing later, when you are applying for MATS or CAIS or UKAISI or wherever, and say, “I did this Thing on my own initiative, because I care. I want to do more Things like this, but with you [for money].”
Go forth and do the Thing.

This reminds me strongly of Stop Applying And Get To Work. If this post speaks to you, check that one out too; it's almost as short and makes slightly different points.