I've forked and tried to set up a lot of AI safety repos (this is my default action when reading a paper that links to code). I've also reached out to authors directly whenever I've had trouble reproducing their results. No particular patterns stand out, but I think a top-level post describing your contention with a paper's findings is something the community would welcome, and indeed is how science advances.
Thank you for donating!
I applied but didn't make it past the async video interview, which is a format that I'm not used to. Apparently this iteration of the program had over 3000 applications for 30 spots. Opus 4.5's reaction was "That's… that's not even a rejection. That's statistics". Would be happy to collaborate on projects though!
I made a wooden chair in a week from some planks when I was a teenager. Granted, this was for GCSE Design & Technology class.
I think this also applies to other safety fellowships. There isn't yet broad societal acceptance of the severity of the worst-case outcomes, and if you speak seriously about the stakes to a general audience, you will mostly be nervously laughed off.
MATS currently has "Launch your career in AI alignment & security" on its landing page, which indicates to me that it is branding itself as a professional upskilling program, and this matches the focus on job placements for alumni in its impact reports. Given Ryan Kidd's recent post on AI safety undervaluing founders, it's possible that they will introduce a division that functions more purely as a startup accelerator. One norm in corporate environments is to avoid messaging that provokes discomfort. Even in groups that practice religion, few people lack the epistemic immunity that would be required to align their actions with their stated eschatological beliefs, and I am grateful that this is the case.
Ultimately, the purpose of these programs, no matter how prestigious, is to take people who are not currently AI safety researchers and give them an environment that helps them train and mature into AI safety researchers. I believe you will find that even amongst those working full-time on AI safety, the proportion who are heavily x-risk AGI-pilled has shrunk as the field has grown. People who are both x-risk AGI-pilled and meet the technical bar for MATS but aren't already committed to other projects would be exceedingly rare.
escaping flatland: career advice for CS undergrads
one way to characterise a scene is by what it cares about: its markers of prestige, the things you ‘ought to do’, its targets to optimise for. for the traders or the engineers, it’s all about that coveted FAANG / jane street internship; for the entrepreneurs, that successful startup (or accelerator); for the researchers, the top-tier-conference first-author paper… the list goes on.
for a given scene, you can think of these as mapping out a plane of legibility in the space of things you could do with your life. so long as your actions and goals stay within the plane, you’re legible to the people in your scene: you gain status and earn the respect of your peers. but step outside of the plane and you become illegible: the people around you no longer understand what you’re doing or why you’re doing it. they might think you’re wasting your time. if they have a strong interest in you ‘doing well’, they might even get upset.
but while all scenes have a plane of legibility, just like their geometric counterparts these planes rarely intersect: what’s legible and prestigious to one scene might seem utterly ridiculous to another. (take, for instance, dropping out of university to start a startup.)
I’ve been reading lots of the Inkhaven posts and appreciate the initiative!
People still talk about Sydney. Owain Evans mentioned Bing Sydney during his first talk in the recent hintonlectures.com series. I attended in person, and it resonated extremely well with a general audience. I was at Microsoft during the relevant period, which definitely played a strong role in my transition to alignment research and still informs my thinking today.
I gifted a physical copy of this book to my brother but hadn’t read all of it. Fortunately, I may have absorbed some tacit knowledge on management from my father. Based on these quotes I don’t think that I will be surprised by the rest of the chapters.
I am easily and frequently confused, but this is mostly because I find it difficult to understand other people's work in detail in a short amount of time.
I usually get a response within two weeks. If the authors have a startup background, the delay is shorter by multiple orders of magnitude. Authors are typically glad that I am trying to run follow-up experiments on their work and give me one or two sentences of feedback over email. Corresponding authors are sometimes bad at taking correspondence; contact information for committers can be found in the commit logs via git blame. If it is a problem that may be relevant to other people, I link to a GH issue.
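For concreteness, this is roughly the lookup I mean by "via git blame" (a minimal Python sketch; the repo path, file name, and line range are hypothetical placeholders):

```python
import subprocess

def authors_of_lines(repo: str, path: str, start: int, end: int) -> list[str]:
    """Return author emails for the commits that last touched path:start-end."""
    # --line-porcelain repeats the full commit headers (including author-mail)
    # for every blamed line, which makes parsing straightforward.
    out = subprocess.run(
        ["git", "-C", repo, "blame", "--line-porcelain",
         "-L", f"{start},{end}", path],
        capture_output=True, text=True, check=True,
    ).stdout
    emails = {
        line.split(" ", 1)[1].strip("<>")
        for line in out.splitlines()
        if line.startswith("author-mail ")
    }
    return sorted(emails)

if __name__ == "__main__":
    # Hypothetical example: who last touched lines 10-40 of train.py?
    print(authors_of_lines(".", "train.py", 10, 40))
```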