# PodSearch — Semantic search for AI safety podcasts
I built a search tool specifically for AI safety and alignment content.
**What it does:**
Search 174 hours of audio: 181 episodes broken into 20,584 conversation moments, from podcasts including Lex Fridman, Dwarkesh Patel, 80,000 Hours, and Future of Life Institute. Instead of just pointing you to an episode, it takes you to the exact timestamp where an idea is discussed.
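For readers curious how timestamp-level search like this typically works: each transcript segment gets an embedding vector, and a query is ranked against all segments by cosine similarity, returning the best-matching segment with its start time. The sketch below illustrates that retrieval step with hand-made toy vectors; it is not PodSearch's actual implementation, and a real system would compute the vectors with a sentence-embedding model.

```python
import math

# Toy timestamp-level semantic search (a sketch, not PodSearch's real code).
# Assumes each transcript segment already has a precomputed embedding;
# the 3-dimensional vectors here are hand-made for illustration only.
segments = [
    {"episode": "Ep. 1", "start_sec": 754,  "vec": [0.9, 0.1, 0.0]},
    {"episode": "Ep. 2", "start_sec": 1321, "vec": [0.1, 0.9, 0.2]},
    {"episode": "Ep. 3", "start_sec": 208,  "vec": [0.2, 0.1, 0.9]},
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, segments, top_k=1):
    """Rank segments by similarity to the query embedding; return the best top_k."""
    ranked = sorted(segments, key=lambda s: cosine(query_vec, s["vec"]), reverse=True)
    return ranked[:top_k]

# A query embedding that lands closest to the Ep. 1 segment:
hit = search([0.8, 0.2, 0.1], segments)[0]
print(hit["episode"], hit["start_sec"])  # the matching segment and its timestamp
```

The key design point is that retrieval happens at the segment level rather than the episode level, which is what makes jumping to an exact timestamp possible.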
**Curated concepts:**
17 hand-curated concepts (corrigibility, deceptive alignment, mesa-optimization, interpretability, existential risk, treacherous turn, and more), each with selected perspectives and gold clips from the best conversations in the corpus.
**Try it here:** https://bardoonii-podsearch-alignment.hf.space
Example searches that work well:
- "deceptive alignment"
- "Paul Christiano takeoff"
- "what is RLHF"
- "corrigibility"
This is a solo project and still early. I'd genuinely appreciate feedback — what's missing, what's broken, what would make this actually useful for your work?