Skepticism about SIAI's competence screens off skepticism about SIAI's intentions, so of course that's not the true rejection for the vast majority of people. But it genuinely troubles me if nobody's thought of the latter question at all, beyond "Trust us, we have no incentive to implement anything but CEV".

If I told you that a large government or corporation was working hard on AGI plus Friendliness content (and that they were avoiding the obvious traps), even if they claimed altruistic goals, wouldn't you worry a bit about their real plan? What features would make you more or less worried?

I'd worry about selfish institutional behavior, or explicit identification of the programmers' goals with the nation/corporation's selfish interests. Also, I guess, belief in the moral infallibility of some guru.

Otherwise I wouldn't worry about motives, not unless I thought one programmer could feasibly deceive the others and tell the AI to look only at this person's goals. Well, I have to qualify that -- if everyone in the relevant subculture agreed on moral issues and we never saw any public disagreement ...

Vladimir_Nesov: I think the key point is that we're not there yet. Whatever theoretical tools we shape now are either generally useful or generally useless, irrespective of considerations of motive; the currently relevant question is (potential) competence. Only at some point in the (moderately distant) future, conditional on current and future work bearing fruit, might motive become relevant.

Q&A with new Executive Director of Singularity Institute

by lukeprog · 7th Nov 2011 · 182 comments


Today I was appointed the new Executive Director of Singularity Institute.

Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, progressively iterating the document until it was something everyone could get behind.

I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! After 100+ hours of my own work, plus dozens of hours from others, the strategic plan was finally finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone in building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.

I spent two months as a researcher, and was then appointed Executive Director.

In further pursuit of transparency, I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.


The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask it in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov.

4) If your question references something that is online, provide a link.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.


I might respond to certain questions within the comments thread and not on video; for example, when there is a one-word answer.