Vassar's purpose with the first of the two sentences you quote is to point out that I was playing the game wrong. Specifically, the mere fact that I was replying with something to which I had already assigned significant probability before starting the exercise was evidence to Vassar that I had not properly grasped the spirit of the exercise.

The second sentence of the quote can be interpreted as a continuation of the theme of "You're playing the game wrong, Hollerith" if, as seems likely to me now, Vassar saw the purpose (or one of the purposes) of the game as coming up with a statement whose probability (as judged by the player himself) outside the context of the game is as low as possible.

Vassar is very skilled at understanding other people's points of view. Moreover, he saw his job at this time in large part as a negotiator among the singularitarians, which probably caused him to try to get even better at understanding unusual points of view. Finally, during the two years leading up to this exchange that you quote I had been spamming Overcoming Bias pretty hard with my outre system of valuing things (which by the way I have since abandoned -- I am pretty much a humanist now) so of course Vassar had had plenty of exposure to my point of view.

Have you asked Vassar what he meant by the two sentences you quoted?

Living in the Bay Area as I do, I have had a couple of conversations with Vassar, and I applied to the visiting fellows program when Vassar was the main determiner of who got in (I did not get in). Still, I have no evidence that the above sentence means anything more than that Vassar, at the time of writing it, spent a lot of time trying to understand many different points of view -- the more different from his own, the better -- and perhaps that, like some other extremely bright people (Bernard Shaw being one), he gets a kick out of pursuing lines of thought that, despite seeming absurd or monstrous at first, have a certain odd or subtle integrity or a faint ring of truth to them.

Q&A #2 with Singularity Institute Executive Director

by lukeprog · 1 min read · 13th Dec 2011 · 48 comments


Just over a month ago I posted a call for questions about the Singularity Institute. The reaction to my video response was positive enough that I'd like to do another one — though I can't promise video this time. I think that the Singularity Institute has a lot of transparency "catching up" to do.


The Rules (same as before)

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about the Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov and in Eliezer's Singularity Summit 2011 talk.

4) Please provide links to things referenced by your question.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin preparing responses to.


I might respond to certain questions within the comments thread; for example, when there is a one-word answer to the question.

You may repeat questions that I did not answer in the first round, and you may ask follow-up questions to the answers I gave in round one.