Launching Applications for the Global AI Safety Fellowship 2025!

by Aditya_SK · 30th Nov 2024 · 2 min read

5 comments, sorted by top scoring
Terence Coelho · 9mo · 4 points

I notice this is downvoted and by a new user. On the surface, it looks like something I would strongly consider applying to, depending on what happens in my personal life over the next month. Can anyone let me know (either here or privately) if this is reputable?

Aditya_SK · 9mo · 2 points

Hi,

It was quite strange to see it downvoted, and I’m not sure what the issue was. My guess is that the initial username might have played a role; since this is my first post on LessWrong, that may have caused some concern.

As for credibility, you can see that this fellowship has been shared on Twitter by individuals from the partner organisations themselves, as seen here, here, and here.

If you’d like, I’m happy to discuss this further on a call to help alleviate any concerns you may have.

habryka · 9mo · 3 points

The post feels very salesy to me, was written by an org account, and also made statements that seemed false to me, like:

1⃣ Fellows will work with the world’s leading AI safety organisations to advance the safe and beneficial development of AI. Some of our placement partners are the Center for Human Compatible AI (CHAI), FAR.AI, Conjecture, UK AISI and the Mila–Quebec AI Institute.

(Of those, maybe FAR.AI would be deserving of that title, but also, I feel like there is something bad about trying to award that title in the first place.)

There is also no disambiguation of whether this program is focused on existential risk efforts or near-term bias/filter-bubble/censorship/etc. AI efforts, the latter of which I think is usually bad for the world, but at least a lot less valuable.

Aditya_SK · 8mo · 1 point

Thanks for the feedback! It’s quite helpful to get more context.
Quick responses:
1) Yes, we did intend for the hook to be eye-grabbing and mildly salesy: it is part of our promotional material shared across different platforms, and we hoped it would be effective at garnering the interest of talented individuals and encouraging them to work on AI safety. We didn’t think it was dishonest or false; rather, we designed it to be short but effective.
2) It was a sincere mistake that the post was made from an org account.
3) We missed gauging the issues with the phrase ‘leading AI safety organisations’; I think you are right, and we could have been more cautious in how we framed it.
4) We have taken note of the need to state the scope of our efforts, and we intend to factor this in when designing our next outreach framing.

Thanks for your inputs!

Terence Coelho · 9mo · 2 points

Just commenting to say that this is convincing enough (and the application sufficiently low-effort) for me to apply later this month, conditional on being in a position where I could theoretically accept such an offer.

—

TL;DR: Applications are accepted on a rolling basis until 31 December 2024 for a 3–6 month, fully funded research program in AI safety. Fellows work with some of the world’s leading AI safety labs and research institutions. This will be a full-time, in-person placement, after which there may be opportunities to continue the engagement full time, based on mutual fit and the fellow’s performance.

Learn more and apply to be a fellow here, or refer someone you think would be awesome for this here. We’re also looking for Talent Identification Advisors (or Consultants) – find out more about the role here.
—
Impact Academy is an organisation focused on running cutting-edge fellowships to enable global talent to use their careers to contribute to the safe and beneficial development of AI.

Impact Academy’s Global AI Safety Fellowship is a 3–6 month, fully funded research program for exceptional STEM talent worldwide.

1⃣ Fellows will work with leading AI safety organisations to advance the safe and beneficial development of AI. Some of our placement partners are the Center for Human-Compatible AI (CHAI), FAR.AI, Conjecture, and the UK AISI.

Applications are being accepted on a rolling basis until 31 December 2024, but early applications are strongly encouraged. The exact start date of the Fellowship will be decided by the candidate and the placement organisation.

Fellows will work in person with partner organisations, subject to visa approval. If fellows experience visa delays, we will enable them to work from our shared offices at global AI safety hubs.

Ideal candidates for the program will have:

  • Demonstrated programming proficiency (e.g. >1 year of relevant professional experience).
  • A strong background in ML (e.g. full-semester university courses, significant research projects, or publications in ML).
  • A track record of excellence (e.g. outstanding achievements in academics or other areas).
  • An interest in pursuing research to reduce the risks from advanced AI systems.

Please apply even if you do not meet all qualifications! Competitive candidates may excel in some areas while developing in others.

Fellows will receive a comprehensive financial package to cover their salary, living expenses, and research costs, along with dedicated resources for building foundational knowledge in AI safety, regular mentorship, and 1:1 coaching calls with the Impact Academy team. Fellows who perform well will have reliable opportunities to continue working full time with their placement organisation.

To learn more and apply, visit our website.

Know someone who would be a good fit? Refer them through this form. There is a $2,000 reward for anyone who refers a candidate who gets selected for placement.

For any queries, please reach out at aisafety@impactacademy.org.

Apply Now!