The current cohort of the ML Alignment & Theory Scholars Program, MATS 6.0, had a unique application process and its broadest selection of mentors yet, with 40 mentors to apply to. I was invited to interview with twelve mentors and was accepted by five (which I later learned was an...
This report explores the potential of large language models (LLMs) to enhance biosecurity. We interviewed nine biosecurity experts to understand their daily tasks and how LLMs could be more useful for their work. Our findings indicate that approximately 50% of our interviewees’ biosecurity-related tasks, such as gathering information...
I'm sharing a paper I previously presented to the Stanford Existential Risks Conference in April 2023. In past years, much discussion of making the long-term future go well has focused on AI alignment and mitigating extinction risks. More recently, there has been more exploration of how this may fail to...
The Gradient is a “digital publication about artificial intelligence and the future,” founded by researchers at the Stanford Artificial Intelligence Laboratory. I found the latest essay, “The Artificiality of Intelligence,” by a PhD student at UC Berkeley, to be an interesting perspective from the AI ethics/fairness community. Some quotes I found especially...
> Several tech leaders descended upon Capitol Hill last week to discuss the rapid expansion of generative AI. It was a mostly staid meeting until the potential harms from Meta's new Llama 2 model came up. > > During the discussion, attended by most of the Senate's 100 members, Tristan...
1. For advisors: submit project proposals by September 18th. 2. For students: apply by September 29th. The Supervised Program on Alignment Research (SPAR) is an intercollegiate, project-based research program running this fall for students interested in AI safety. Organized by groups at UC Berkeley, Georgia Tech, and Stanford, SPAR matches...
In 2022 and 2023, there has been a growing focus on recruiting talented individuals to work on mitigating the potential existential risks posed by artificial intelligence. For example, we’ve seen an increase in the number of university clubs, retreats, and workshops dedicated to introducing people to the issue of existential...