Why don't you introduce really impressive people you personally know to AI alignment (more often)?
I have a few questions for the subset of readers who: 1. Believe technical AI alignment research is both important and hard to make significant progress in; 2. Have a personal connection with a person who doesn't know much about AI alignment, but who you think would have a real...
It's funny that this post has probably made me feel more doomy about AI risk than any other LW post published this year. Perhaps for no particularly good reason. There's just something really disturbing to me about seeing a vivid case where folks like Jacob, Eli and Samotsvety, apparently along with many others, predict a tiny chance that a certain thing in AI progress will happen (by a certain time), and then it just... happens.