TLDR: Just pick up the phone. Agree on permissions for information before you share it (see Step 4).
I have worked as a professional journalist covering AI for over a year, and during that time, multiple people working in AI safety have asked me for advice on engaging with journalists. At this point, I've converged on some core lessons, so I figured I should share them more widely.
I've also reviewed some of the prior art about how to talk to journalists on LessWrong and found it unsatisfying. The answer to the question often seems to be "Don't."
Unsurprisingly then, I think many people feel like they are not prepared to talk to journalists. They...
AGI Should Have Been a Dirty Word
Epistemic status: passing thought.
It is absolutely crazy that Mark Zuckerberg can say that smart glasses will unlock "personal superintelligence," or whatever incoherent nonsense, and be taken seriously. That reflects poorly on AI safety's comms capacity.
Bostrom's book should have laid claim to superintelligence! It came out early enough that it should have been able to plant its flag and set the connotations of the term. It should have made it so Zuckerberg could not throw around the word so casually.
I would go further, and say that the early safety writing on AGI should have been enough that the labs were too scared to say in public in...