Robert Miles has been making educational videos about AI existential risk and AI alignment for over four years. I've never spoken with him, but I'm sure he has learned a lot about communicating these ideas to a general audience in the process. I don't know whether he has written up those lessons anywhere, but it might be worth reaching out to him if you're looking to talk with someone who has experience with this.
Another resource: Vael Gates and Collin Burns shared results from outreach testing they ran in December 2022 in "What AI Safety Materials Do ML Researchers Find Compelling?". I should emphasize, though, that this tested outreach to ML researchers rather than to the general public, so it may not be quite what you're looking for.