From London, now living in Mountain View.
Thanks, that's useful. Sad to see no Eliezer, no Nate, nor anyone else from MIRI or with a similar perspective though :(
Don't let your firm opinion get in the way of talking to people before you act. It was Elon's determination to act before talking to anyone that led to the creation of OpenAI, which seems to have sealed humanity's fate.
Perhaps this is a bad idea, but it has occurred to me that if I were a board member, I would want to quite frequently have confidential conversations with randomly selected employees.
This is true whether we adopt my original idea that each board member keeps what they learn from these conversations entirely to themselves, or Ben's better proposed modification that it's confidential but can be shared with the whole board.
For a cryptographically secure commitment, I would use HMAC with a random key: publish the HMAC tag as the commitment, then reveal by publishing both the message and the key. The random key is what makes this safe even for a one-character message like "Y" — without it, anyone could brute-force the tiny message space.
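A minimal sketch of this commitment scheme in Python using the standard library (the function names `commit` and `verify` are my own, not from any particular protocol):

```python
import hmac
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Commit to a message: publish the tag now; keep key and message secret."""
    # A fresh random key prevents brute-forcing short messages like b"Y".
    key = secrets.token_bytes(32)
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return tag, key

def verify(tag: bytes, message: bytes, key: bytes) -> bool:
    """Check a revealed (message, key) pair against the published tag."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

# Commit to a one-character message.
tag, key = commit(b"Y")
# Later, reveal both message and key; anyone can verify:
print(verify(tag, b"Y", key))  # the genuine reveal checks out
print(verify(tag, b"N", key))  # a different message does not
```

Note that binding here rests on the collision resistance of the underlying hash; with HMAC-SHA256 and a 32-byte key this is a standard construction for hash-based commitments.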
Extracted from a Facebook comment:
I don't think the experts are expert on this question at all. Eliezer's train of thought essentially started with "Supposing you had a really effective AI, what would follow from that?" His thinking wasn't at all predicated on any particular way you might build a really effective AI, and knowing a lot about how to build AI isn't expertise on what the results are when it's as effective as Eliezer posits. It's like thinking you shouldn't have an opinion on whether there will be a nuclear conflict over Kashmir unless you're a nuclear physicist.