Although I'm writing this about my personal situation and project, I expect others to be in this situation as well.

I'm thinking about joining ProjectX. It sounds like a perfect opportunity to work with AI, gain experience, and potentially make something I'm proud of. Unfortunately, it is about improving AI efficiency, which I read as "capabilities". When capabilities increase, doomsday gets nearer and more likely.
More information about the project: 

"ProjectX: ML Research Competition

ProjectX is the world’s largest undergraduate machine learning research competition with competing teams from top universities across the world. 3 winning teams will each be awarded a cash prize of CAD 20,000, and all participants will be invited to attend the annual University of Toronto AI Conference in mid January 2024. Last year's competition was sponsored by Google, Intel and Nvidia, and featured keynote speaker James Van Loon.

This year's focus is Efficient AI: investigating innovative methodologies and ML approaches to create AI models that significantly reduce power consumption without compromising performance.
We are fielding a team of 3-6 students to represent Northwestern this year. The team will work on their chosen problem from October to December 2023, culminating in a paper describing their findings.

More info about this year's competition

Sample paper from last year's winning team"

shminux:

Your potential contribution to timeline shortening as an undergrad is probably negligible on the margins, unless you are in the top 1% of all applicants, and perhaps even then. The field is crowded with very intelligent, well-paid, and very determined people who do this for a living. So whatever considerations you have, they need not be influenced by AGI killeveryoneism.

Provided that you make sure you don't publish some massive capabilities progress—which I think is pretty unlikely for most undergrads—I think the benefit of having an additional alignment-conscious person with relevant skills probably outweighs the very marginal cost of tiny incremental capabilities ideas.