[CS 2881r] [Week 3] Adversarial Robustness, Jailbreaks, Prompt Injection, Security
This is the third blog post for Boaz Barak’s AI Safety Seminar at Harvard University. I have aimed to condense the lecture into as easily readable a format as possible. Author Intro: Hello to everyone reading this! I am Ege, a junior at Harvard studying Statistics and Physics with an...