At what level of talent do you think an attempt to build an FAI would start to do more (expected) good than harm?

I'm not sure that scientific talent is the relevant variable here. More talented folk are more likely to achieve both positive and negative outcomes. I would place more weight on epistemic rationality, motivations (personality, background checks), institutional setup and culture, the strategy of first trying to test the tractability of robust FAI theory and then advancing FAI before code (with emphasis on the more-FAI-less-AGI problems first), and similar variables.

Do you think this concern is reasonable?

Certainly it's a reasonable concern from a distance. Folk do try to estimate and reduce the risks you mentioned, and to investigate alternative non-FAI interventions. My personal sense is that these efforts have been reasonable but need to be bolstered along with the FAI research team. If it looks like a credible (to me) team may be assembled, my plan would be (and has been) to monitor and influence team composition, culture, and exposure to information. In other words, I'd like to select folk ready to reevaluate as well as to make progress, and to work hard to build that culture as researchers join up.

If so, I think it would help a lot if SIAI got into the habit of making its strategic thinking more transparent.

I can't speak for everyone, but I am happy to see SIAI become more transparent in various ways. The publication of the strategic plan is part of that, and I believe Luke is keen (with encouragement from others) to increase communication and transparency in other ways.

publish the meeting minutes

This one would be a decision for the board, but I'll give my personal take again. Personally, I like the recorded GiveWell meetings and see the virtues of transparency in being more credible to observers, and in providing external incentives. However, I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics, e.g. frank discussions of the strengths and weaknesses of potential Summit speakers, partners, and potential hires that could cause hurt feelings and damage valuable relationships. Because of this problem I would be more wholehearted in supporting other forms of transparency, e.g. more frequent and detailed reporting on activities, financial transparency, the strategic plan, things like Luke's Q&A, etc. But I wouldn't be surprised if this happens too.



Wei_Dai (19 points, 9y):

Let's assume that all the other variables are already optimized to minimize the risk of creating an UFAI. It seems to me that the relationship between the ability level of the FAI team and the probabilities of the possible outcomes must then look something like this: [chart not shown]

This chart isn't meant to communicate my actual estimates of the probabilities and crossover points, but just the overall shapes of the curves. Do you disagree with them? (If you want to draw your own version, click here [http://www.chartgo.com/create.do?chart=line&dimension=2d&width=400&height=300&orientation=vertical&title=&subtitle=&xtitle=ability+of+FAI+team&ytitle=probability+of+outcome&fonttypetitle=bold&fonttypelabel=normal&labelorientation=horizontal&chrtbkgndcolor=gradientblue&max_yaxis=1.0&transparency=1&legend=1&min_yaxis=0.0&roundedge=1&shadow=1&border=1&curve=1&threshold=0.0&xaxis1=low%0D%0Apretty+competent%0D%0Aworld+class%0D%0Asuperhuman&yaxis1=.1%0D%0A.2%0D%0A.4%0D%0A.3&group1=UFAI&yaxis2=0%0D%0A.05%0D%0A.4%0D%0A.6&group2=FAI&yaxis3=.9%0D%0A.75%0D%0A.2%0D%0A.1&group3=null&add=&rem=&from=linejsp&lang=en] and then click on "Modify This Chart".)

Has anyone posted SIAI's estimates of those risks?

That seems reasonable, and given that I'm more interested in the "strategic" as opposed to "tactical" reasoning within SIAI, I'd be happy for it to be communicated through some other means.
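The curves Wei_Dai describes can be reconstructed from the chartgo URL's query parameters (ability levels on the x-axis, outcome probabilities for UFAI, FAI, and null on the y-axis). Here is a minimal sketch of the same chart, assuming matplotlib as the plotting library (the original was drawn with chartgo.com):

```python
# Sketch of Wei_Dai's chart, with data points read off the chartgo URL parameters.
import matplotlib.pyplot as plt

ability = ["low", "pretty competent", "world class", "superhuman"]  # xaxis1
p_ufai = [0.10, 0.20, 0.40, 0.30]  # yaxis1: probability of UFAI
p_fai  = [0.00, 0.05, 0.40, 0.60]  # yaxis2: probability of FAI
p_null = [0.90, 0.75, 0.20, 0.10]  # yaxis3: probability of neither outcome ("null")

for probs, label in [(p_ufai, "UFAI"), (p_fai, "FAI"), (p_null, "null")]:
    plt.plot(ability, probs, marker="o", label=label)

plt.xlabel("ability of FAI team")
plt.ylabel("probability of outcome")
plt.ylim(0.0, 1.0)
plt.legend()
plt.show()
```

As encoded in the URL, the null outcome dominates at low ability, UFAI peaks around "world class", and FAI only overtakes UFAI at the top of the range, which is the "overall shapes of the curves" point being made above.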
wedrifid (9 points, 9y):

Basically it ensures that all serious discussion and decision making is made prior to any meeting in informal conversations so that the meeting sounds good. Such a record should be considered a work of fiction regardless of whether it is a video transcript or a typed document. (Only to the extent that the subject of the meeting matters - harmless or irrelevant things wouldn't change.)

That's more like it!

Q&A with new Executive Director of Singularity Institute

by lukeprog · 1 min read · 7th Nov 2011 · 182 comments



Today I was appointed the new Executive Director of Singularity Institute.

Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, progressively iterating the document until it was something everyone could get behind.

I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! 100+ hours of work later, plus dozens of hours from others, the strategic plan was finally finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone in building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.

I spent two months as a researcher, and was then appointed Executive Director.

In further pursuit of transparency, I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.

 

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov.

4) If you reference certain things that are online in your question, provide a link.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.

 

I might respond to certain questions within the comments thread and not on video; for example, when there is a one-word answer.
