Eliezer's investment in OB/LW apparently hasn't returned even a single full-time FAI researcher...

I believe that SIAI has been very successful in using OB/LW not only to raise awareness of risks from AI but also to lend credence to the idea. From the very beginning I admired that feat.

Eliezer Yudkowsky's homepage is a perfect example of its type. Just imagine if he had concentrated solely on spreading the idea of risks from AI and the necessity of a friendliness theory. Without any business background or an academic degree, he would appear to many people to be yet another crackpot spreading prophecies of doom. But someone who is apparently well-versed in probability theory, who has studied cognitive biases and tries to refine the art of rationality? Someone like that can't possibly be deluded enough to hold complex beliefs that are completely unfounded; there must be more to it.

That's probably the biggest public relations stunt in the history of marketing extraordinary ideas.

Certainly, by many metrics LW can be considered wildly successful, and my comment wasn't meant as a criticism of Eliezer or SIAI. But if SIAI intended to build an FAI using its own team of FAI researchers, then at least so far LW has failed to recruit any such researchers for it. I'm trying to figure out whether this was the expected outcome, and if not, how updating on it has changed SIAI's plans. (Or to remind them to update in case they forgot to do so.)

JoshuaZ: Most of your analysis seems right, but the last sentence seems likely to be off. There have been a lot of clever PR stunts in history.

Q&A with new Executive Director of Singularity Institute

by lukeprog · 1 min read · 7th Nov 2011 · 182 comments

Today I was appointed the new Executive Director of Singularity Institute.

Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, progressively iterating the document until it was something everyone could get behind.

I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! After 100+ hours of my own work, plus dozens of hours from others, the strategic plan was finally finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone toward building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.

I spent two months as a researcher, and was then appointed Executive Director.

In further pursuit of transparency, I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.

 

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov.

4) If you reference certain things that are online in your question, provide a link.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.

 

I might respond to certain questions within the comments thread and not on video; for example, when there is a one-word answer.
