Hi again, I'm back with the second episode covering my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park was a cofounder of StakeOut.AI, a non-profit focused on making AI go well for humans, along with Harry Luk and one other individual, whose name has been removed due to requirements of her current position.

Unfortunately, due to funding pressures, the organization recently had to dissolve, but the founders continue to contribute positively to society in their respective roles.

Specifically, this episode focuses on what Dr. Park calls the battlegrounds in the effort to make AI go well. In addition, we talk about Rich Sutton, the OpenAI drama from November 2023, and I unfortunately make the podcast's first mention of Elon Musk or related products.

This interview was made possible through the 2024 Winter AI Safety Camp.

Note that the interview will be broken up into three episodes, and this is only the second; the third will be released next week.

As I have mentioned previously, any feedback, advice, comments, etc. are greatly appreciated.

Spotify
Apple Podcasts
Amazon Music
YouTube Podcasts