In 2009 EY asked "What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?"

rhollerith_dot_com responded, "That the EV of the humans is coherent and does not care how much suffering exists in the universe."

Vassar responded to this with the scariest thing I've read on LessWrong, which was:

"But you believe that, don't you? I certainly place a MUCH higher probability on that than on the sort of claims some people have proposed."

Do you agree with Vassar's reply?

Vassar's purpose with the first of the two sentences you quote was to point out that I was playing the game wrong. Specifically, the mere fact that I replied with something to which I had already assigned significant probability before starting the exercise was evidence to Vassar that I had not properly grasped the spirit of the exercise.

The second sentence of the quote can be interpreted as a continuation of the theme of "You're playing the game wrong, Hollerith," if, as seems likely to me now, Vassar saw the purpose (or one of the purposes) ...

Q&A #2 with Singularity Institute Executive Director

by lukeprog · 1 min read · 13th Dec 2011 · 48 comments

Just over a month ago I posted a call for questions about the Singularity Institute. The reaction to my video response was positive enough that I'd like to do another one — though I can't promise video this time. I think that the Singularity Institute has a lot of transparency "catching up" to do.


The Rules (same as before)

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask it in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about the Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov and in Eliezer's Singularity Summit 2011 talk.

4) Please provide links to anything referenced by your question.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin preparing responses to.


I might respond to certain questions in the comments thread; for example, when a question has a one-word answer.

You may repeat questions that I did not answer in the first round, and you may ask follow-up questions to the answers I gave in round one.