As a subset of this question, do you think that establishing a school with the express purpose of training future rationalists/AGI programmers from an early age is a good idea? Don't you think that people who've been raised with strong epistemic hygiene should be building AGI rather than people who didn't acquire such hygiene until later in life?

The only reasons I can see for it not working would be:

  1. Predictions that AGIs will come before the next generation of rationalists comes along (which is also a question of how early to start such an education program).
  2. Belief that our current researchers are up to the challenge. (Even then, wouldn't having lots of people who've had a structured education designed to produce the best FAI researchers undeniably reduce existential risk?)

EDIT (for clarification): Eliezer has said:

"I think that saving the human species eventually comes down to, metaphorically speaking, nine people and a brain in a box in a basement"

Just as they would be building an intelligence greater than themselves, so too must we build human intelligences greater than ourselves.

I can't speak for the SIAI, but to me this sounds like a suboptimal use of resources, and bad PR. It trips my ...

Q&A #2 with Singularity Institute Executive Director

by lukeprog · 13th Dec 2011

Just over a month ago I posted a call for questions about the Singularity Institute. The reaction to my video response was positive enough that I'd like to do another one — though I can't promise video this time. I think that the Singularity Institute has a lot of transparency "catching up" to do.

 

The Rules (same as before)

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about the Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov and in Eliezer's Singularity Summit 2011 talk.

4) Please provide links to things referenced in your question.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin preparing responses to.

 

I might respond to certain questions within the comments thread; for example, when a question has a one-word answer.

You may repeat questions that I did not answer in the first round, and you may ask follow-up questions to the answers I gave in round one.