Applications have opened for the Summer 2023 Cohort of the SERI ML Alignment Theory Scholars Program! Our mentors include Alex Turner, Dan Hendrycks, Daniel Kokotajlo, Ethan Perez, Evan Hubinger, Janus, Jeffrey Ladish, Jesse Clifton, John Wentworth, Lee Sharkey, Neel Nanda, Nicholas Kees Dupuis, Owain Evans, Victoria Krakovna, and Vivek Hebbar....
TLDR: I'm launching Good Futures Initiative, a winter project internship sponsoring students to take on projects over winter break to upskill, test their fit for different career aptitudes, or do impactful work. You can read more on our website and apply here by December 11th if interested! Good Futures Initiative...
When I was first introduced to AI Safety, coming from a background studying psychology, I kept getting frustrated with the way people defined and used the word "intelligence". They weren't able to address my questions about cultural intelligence, social evolution, and general intelligence in a way I found rigorous...
At EA UC Berkeley, we’re launching an ongoing series of contests called the Artificial Intelligence Misalignment Solutions (AIMS) series. This third contest, Edit Your Source Code, is an AI Safety sci-fi creative writing contest now open to any student (high school, undergrad, grad): here are our interest and submission forms!...
This post: * Announces the winning submissions to the Distillation Contest * Gives further insight into the scoring process for the contest * Examines the effectiveness of our advertising strategies * Gives a brief impact estimate of the contest * Shares my advice for community builders who are planning to...
At EA UC Berkeley, we’re launching an ongoing series of contests called the Artificial Intelligence Misalignment Solutions (AIMS) series. This second contest, the Distillation Contest, is now open to any student enrolled in a university/college: here are our interest and submission forms! The contest has prizes as large as $2,500...